PR | AppalaKarthik: Fix kong ingress name in documentation
Result | FAILURE
Tests | 15 failed / 694 succeeded
Started |
Elapsed | 38m44s
Revision | ec3dc26e02c2e6c80f58aee1b997b9a5fcae6b32
Refs | 2773
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sCustomResourceConversionWebhook\s\[Privileged\:ClusterAdmin\]\sshould\sbe\sable\sto\sconvert\sa\snon\shomogeneous\slist\sof\sCRs\s\[Conformance\]$'
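Decoded, the shell-escaped `--ginkgo.focus` regex above is simply the full name of the failing spec. A throwaway sketch (purely illustrative, not part of the test tooling) that unescapes it:

```python
import re

# The focus regex exactly as passed to --ginkgo.focus above.
focus = (r"Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]"
         r"\sCustomResourceConversionWebhook\s\[Privileged\:ClusterAdmin\]"
         r"\sshould\sbe\sable\sto\sconvert\sa\snon\shomogeneous\slist"
         r"\sof\sCRs\s\[Conformance\]$")

# Drop the trailing anchor, turn each "\s" back into a space,
# then strip the remaining backslash escapes (\[, \], \-, \:).
name = re.sub(r"\\s", " ", focus.rstrip("$"))
name = re.sub(r"\\(.)", r"\1", name)
print(name)
```

Ginkgo matches this regex against each spec's concatenated description string, which is why spaces and brackets are escaped in the command line.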
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
May 17 14:16:18.978: creating role binding crd-webhook-5115:webhook to access configMap
Unexpected error:
    <*errors.StatusError | 0xc0003a0500>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "Internal error occurred: resource quota evaluation timed out",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "",
                UID: "",
                Causes: [
                    {
                        Type: "",
                        Message: "resource quota evaluation timed out",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
Internal error occurred: resource quota evaluation timed out occurred
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:243
from junit_19.xml
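The "from junit_19.xml" suffix means this failure text was read out of one of the per-worker JUnit files the job uploads. A minimal sketch of pulling failed cases from such a file with the Python stdlib; the XML below is a made-up stand-in, not the real junit_19.xml:

```python
import xml.etree.ElementTree as ET

# Synthetic stand-in for a Prow junit file such as junit_19.xml;
# the real file holds one <testcase> element per spec run by that worker.
junit = """<testsuite tests="3" failures="1">
  <testcase name="CustomResourceConversionWebhook should be able to convert a non homogeneous list of CRs" time="11.0">
    <failure message="creating role binding">Internal error occurred: resource quota evaluation timed out</failure>
  </testcase>
  <testcase name="a passing spec" time="1.2"/>
  <testcase name="a skipped spec" time="0"><skipped/></testcase>
</testsuite>"""

root = ET.fromstring(junit)
# A case failed if it carries a <failure> child; <skipped> cases are neither.
failed = [tc.get("name") for tc in root.iter("testcase")
          if tc.find("failure") is not None]
print(len(failed), failed)
```

The job-level counts in the header (15 failed / 694 succeeded) are the aggregate of these per-worker files.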
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 17 14:16:07.941: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
May 17 14:16:18.978: FAIL: creating role binding crd-webhook-5115:webhook to access configMap
Unexpected error:
    <*errors.StatusError | 0xc0003a0500>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "Internal error occurred: resource quota evaluation timed out",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "",
                UID: "",
                Causes: [
                    {
                        Type: "",
                        Message: "resource quota evaluation timed out",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
Internal error occurred: resource quota evaluation timed out occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/apimachinery.createAuthReaderRoleBindingForCRDConversion(0xc000c298c0, 0xc003704a00, 0x10)
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:243 +0x389
k8s.io/kubernetes/test/e2e/apimachinery.glob..func4.1()
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:132 +0x132
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00022d200)
    _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc00022d200)
    _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b
testing.tRunner(0xc00022d200, 0x72e36d8)
    /usr/local/go/src/testing/testing.go:1203 +0xe5
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1248 +0x2b3
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "crd-webhook-5115".
STEP: Found 0 events.
May 17 14:16:25.592: INFO: POD NODE PHASE GRACE CONDITIONS
May 17 14:16:25.592: INFO:
May 17 14:16:25.739: INFO: Logging node info for node kind-control-plane
May 17 14:16:25.795: INFO: Node Info: &Node{ObjectMeta:{kind-control-plane 1fc1c875-88c2-4f79-b48f-f6ef32b124da 586 0 2022-05-17 14:12:51 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-05-17 14:12:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-05-17 14:12:54 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2022-05-17 14:13:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2022-05-17 14:13:20 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-17 14:13:20 +0000 UTC,LastTransitionTime:2022-05-17 14:12:47 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-17 14:13:20 +0000 UTC,LastTransitionTime:2022-05-17 14:12:47 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-17 14:13:20 +0000 UTC,LastTransitionTime:2022-05-17 14:12:47 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-17 14:13:20 +0000 UTC,LastTransitionTime:2022-05-17 14:13:20 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.4,},NodeAddress{Type:Hostname,Address:kind-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:97599c4ddadc4d3eae566d304d7d104d,SystemUUID:e786229d-3adc-40a9-a01d-f7a176ae9288,BootID:74a98684-362f-4056-ae8e-c98add53e9d2,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.19+64ad393c62a2c5,KubeProxyVersion:v1.22.10-rc.0.19+64ad393c62a2c5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:5963165f44b226e3bcc74555b40074fc3b47b362fc2fce35ac184854a283aa9a k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:63e87a9aad6a6666e3f6f4b9aaee92307a9390cfc1e41a029ceca9f6c4439478 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:123265849,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:59073d1c8ff506ad93a224d17df309f7bbbc344acb535ac430fef15e3866f973 
k8s.gcr.io/kube-proxy:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:7f13d8eb6f1182786e7080a1f94ea5d4cf6873534c2d47e79c444c8216f823aa k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:53932856,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 17 14:16:25.795: INFO: Logging kubelet events for node kind-control-plane May 17 14:16:25.859: INFO: Logging pods the kubelet thinks is on node kind-control-plane May 17 14:16:26.025: INFO: local-path-provisioner-6c9449b9dd-bllk2 started at 2022-05-17 14:13:20 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.025: INFO: Container local-path-provisioner ready: true, restart count 0 May 17 14:16:26.025: INFO: coredns-78fcd69978-r97ls started at 2022-05-17 14:13:20 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.025: INFO: Container coredns ready: true, restart count 0 May 17 14:16:26.025: INFO: kube-proxy-5tzjs started at 2022-05-17 14:13:26 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.025: INFO: Container kube-proxy ready: true, restart count 0 May 17 14:16:26.025: INFO: coredns-78fcd69978-b6zn2 started at 2022-05-17 14:13:20 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.025: INFO: Container coredns ready: true, restart count 0 May 17 14:16:26.025: INFO: kube-apiserver-kind-control-plane started at 2022-05-17 14:13:01 +0000 UTC (0+1 container statuses recorded) May 17 
14:16:26.025: INFO: Container kube-apiserver ready: true, restart count 0 May 17 14:16:26.025: INFO: kube-controller-manager-kind-control-plane started at 2022-05-17 14:13:01 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.025: INFO: Container kube-controller-manager ready: false, restart count 0 May 17 14:16:26.025: INFO: kube-scheduler-kind-control-plane started at 2022-05-17 14:13:00 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.025: INFO: Container kube-scheduler ready: false, restart count 0 May 17 14:16:26.025: INFO: kindnet-s6wb7 started at 2022-05-17 14:13:07 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.025: INFO: Container kindnet-cni ready: true, restart count 0 May 17 14:16:26.025: INFO: etcd-kind-control-plane started at 2022-05-17 14:13:00 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.025: INFO: Container etcd ready: true, restart count 0 May 17 14:16:26.295: INFO: Latency metrics for node kind-control-plane May 17 14:16:26.295: INFO: Logging node info for node kind-worker May 17 14:16:26.373: INFO: Node Info: &Node{ObjectMeta:{kind-worker 5ba69ac9-ccd5-447d-9f27-9bee10b23121 6999 0 2022-05-17 14:13:16 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker kubernetes.io/os:linux topology.hostpath.csi/node:kind-worker] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-9914":"kind-worker","csi-mock-csi-mock-volumes-7782":"csi-mock-csi-mock-volumes-7782"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubeadm Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-05-17 14:15:50 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2022-05-17 14:15:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:56 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:56 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:56 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-17 14:15:56 +0000 UTC,LastTransitionTime:2022-05-17 14:13:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.2,},NodeAddress{Type:Hostname,Address:kind-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d7e8057f975041deb201a063c162c158,SystemUUID:fc207ccb-2e60-4380-b38c-671b606f274a,BootID:74a98684-362f-4056-ae8e-c98add53e9d2,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.19+64ad393c62a2c5,KubeProxyVersion:v1.22.10-rc.0.19+64ad393c62a2c5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:5963165f44b226e3bcc74555b40074fc3b47b362fc2fce35ac184854a283aa9a k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:63e87a9aad6a6666e3f6f4b9aaee92307a9390cfc1e41a029ceca9f6c4439478 
k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:123265849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:59073d1c8ff506ad93a224d17df309f7bbbc344acb535ac430fef15e3866f973 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:86742272,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:7f13d8eb6f1182786e7080a1f94ea5d4cf6873534c2d47e79c444c8216f823aa k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:53932856,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 
k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d 
k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-ephemeral-9914^da35d3ba-d5eb-11ec-918a-225d08066155],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-ephemeral-9914^da35d3ba-d5eb-11ec-918a-225d08066155,DevicePath:,},},Config:nil,},} May 17 14:16:26.374: INFO: Logging kubelet events for node kind-worker May 17 14:16:26.482: INFO: Logging pods the kubelet thinks is on node kind-worker May 17 14:16:26.546: INFO: affinity-nodeport-xj24q started at 2022-05-17 14:15:27 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.546: INFO: Container affinity-nodeport ready: true, restart count 0 May 17 14:16:26.546: INFO: csi-hostpathplugin-0 started at 2022-05-17 14:15:44 +0000 UTC (0+7 container statuses recorded) May 17 14:16:26.546: INFO: Container csi-attacher ready: true, restart count 0 May 17 14:16:26.546: INFO: Container csi-provisioner ready: true, restart count 0 May 17 14:16:26.546: INFO: Container csi-resizer ready: true, restart count 0 May 17 14:16:26.546: INFO: Container csi-snapshotter ready: true, restart count 0 May 17 14:16:26.546: INFO: Container hostpath ready: true, restart count 0 May 17 14:16:26.546: INFO: Container liveness-probe ready: true, restart count 0 May 17 
14:16:26.546: INFO: Container node-driver-registrar ready: true, restart count 0 May 17 14:16:26.546: INFO: deployment-55649fd747-6dbnz started at 2022-05-17 14:14:10 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.546: INFO: Container nginx ready: false, restart count 0 May 17 14:16:26.546: INFO: send-events-4adc3414-703f-4b62-a42c-926d4b4b6fc2 started at <nil> (0+0 container statuses recorded) May 17 14:16:26.547: INFO: termination-message-container5de5961b-b8bb-4868-adb0-b23802f044ae started at <nil> (0+0 container statuses recorded) May 17 14:16:26.547: INFO: service-proxy-toggled-mlq7q started at 2022-05-17 14:15:47 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.547: INFO: Container service-proxy-toggled ready: true, restart count 0 May 17 14:16:26.547: INFO: hostexec-kind-worker-6t6pq started at 2022-05-17 14:15:52 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.547: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:26.547: INFO: agnhost-primary-5db8ddd565-ktltd started at 2022-05-17 14:15:55 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.547: INFO: Container primary ready: true, restart count 0 May 17 14:16:26.547: INFO: agnhost-replica-6bcf79b489-885sh started at 2022-05-17 14:15:56 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.547: INFO: Container replica ready: false, restart count 0 May 17 14:16:26.547: INFO: deployment-55649fd747-f5bss started at 2022-05-17 14:14:10 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.547: INFO: Container nginx ready: false, restart count 0 May 17 14:16:26.547: INFO: pod-terminate-status-2-1 started at <nil> (0+0 container statuses recorded) May 17 14:16:26.547: INFO: service-proxy-disabled-qk4md started at 2022-05-17 14:15:32 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.547: INFO: Container service-proxy-disabled ready: true, restart count 0 May 17 14:16:26.547: INFO: 
busybox-readonly-fs83fb0cb9-60d4-419e-8e63-6beee25ccd06 started at 2022-05-17 14:15:24 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.547: INFO: Container busybox-readonly-fs83fb0cb9-60d4-419e-8e63-6beee25ccd06 ready: true, restart count 0 May 17 14:16:26.547: INFO: pod-b8564c48-8aa2-4bdb-a5eb-256f8afd47cc started at <nil> (0+0 container statuses recorded) May 17 14:16:26.547: INFO: concurrent-27546615-rw8md started at 2022-05-17 14:15:00 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.547: INFO: Container c ready: true, restart count 0 May 17 14:16:26.547: INFO: hostexec-kind-worker-kbzwd started at <nil> (0+0 container statuses recorded) May 17 14:16:26.547: INFO: service-headless-tgqlh started at 2022-05-17 14:13:53 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.547: INFO: Container service-headless ready: true, restart count 0 May 17 14:16:26.547: INFO: pod-qos-class-5c206c25-c993-4e12-9d89-06931c05bb47 started at <nil> (0+0 container statuses recorded) May 17 14:16:26.547: INFO: service-headless-q5g89 started at 2022-05-17 14:13:53 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.547: INFO: Container service-headless ready: true, restart count 0 May 17 14:16:26.547: INFO: service-headless-toggled-9zh79 started at 2022-05-17 14:14:08 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.547: INFO: Container service-headless-toggled ready: true, restart count 0 May 17 14:16:26.547: INFO: service-headless-toggled-nw4rn started at 2022-05-17 14:14:08 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.547: INFO: Container service-headless-toggled ready: true, restart count 0 May 17 14:16:26.547: INFO: inline-volume-tester-h6pnw started at <nil> (0+0 container statuses recorded) May 17 14:16:26.547: INFO: frontend-685fc574d5-64jqm started at 2022-05-17 14:15:55 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.547: INFO: Container guestbook-frontend ready: true, restart count 0 May 17 
14:16:26.547: INFO: pod-terminate-status-0-1 started at <nil> (0+0 container statuses recorded) May 17 14:16:26.547: INFO: kube-proxy-dr9f5 started at 2022-05-17 14:13:23 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.547: INFO: Container kube-proxy ready: true, restart count 0 May 17 14:16:26.547: INFO: hostexec-kind-worker-28fdh started at 2022-05-17 14:15:56 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.547: INFO: Container agnhost-container ready: false, restart count 0 May 17 14:16:26.547: INFO: pod-client started at 2022-05-17 14:15:09 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.547: INFO: Container pod-client ready: true, restart count 0 May 17 14:16:26.547: INFO: liveness-7a123bfb-f826-4ce3-ad61-ebc03b5bc931 started at 2022-05-17 14:14:13 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.547: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:26.547: INFO: concurrent-27546616-5t86w started at <nil> (0+0 container statuses recorded) May 17 14:16:26.548: INFO: hostexec-kind-worker-gzkkg started at 2022-05-17 14:15:51 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.548: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:26.548: INFO: csi-mockplugin-0 started at <nil> (0+0 container statuses recorded) May 17 14:16:26.548: INFO: implicit-nonroot-uid started at <nil> (0+0 container statuses recorded) May 17 14:16:26.548: INFO: ss-1 started at <nil> (0+0 container statuses recorded) May 17 14:16:26.548: INFO: kindnet-56p79 started at 2022-05-17 14:13:17 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.548: INFO: Container kindnet-cni ready: true, restart count 0 May 17 14:16:28.824: INFO: Latency metrics for node kind-worker May 17 14:16:28.824: INFO: Logging node info for node kind-worker2 May 17 14:16:28.865: INFO: Node Info: &Node{ObjectMeta:{kind-worker2 d009e97f-d73b-4ed0-8afd-9f776bcb704d 6998 0 2022-05-17 14:13:16 +0000 UTC <nil> 
<nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker2 kubernetes.io/os:linux topology.hostpath.csi/node:kind-worker2] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-2591":"kind-worker2","csi-mock-csi-mock-volumes-8289":"csi-mock-csi-mock-volumes-8289"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubeadm Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubelet Update v1 2022-05-17 14:15:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:47 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:47 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:47 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-17 14:15:47 +0000 UTC,LastTransitionTime:2022-05-17 14:13:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.3,},NodeAddress{Type:Hostname,Address:kind-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:521433a178fd4494a7881394e8d644cf,SystemUUID:c86f292f-55cb-49c2-9dda-efb91a06dab9,BootID:74a98684-362f-4056-ae8e-c98add53e9d2,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.19+64ad393c62a2c5,KubeProxyVersion:v1.22.10-rc.0.19+64ad393c62a2c5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:5963165f44b226e3bcc74555b40074fc3b47b362fc2fce35ac184854a283aa9a k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:63e87a9aad6a6666e3f6f4b9aaee92307a9390cfc1e41a029ceca9f6c4439478 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:123265849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:59073d1c8ff506ad93a224d17df309f7bbbc344acb535ac430fef15e3866f973 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:7f13d8eb6f1182786e7080a1f94ea5d4cf6873534c2d47e79c444c8216f823aa k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:53932856,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 
k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce 
k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 17 14:16:28.865: INFO: Logging kubelet events for node kind-worker2 May 17 14:16:28.956: INFO: Logging pods the kubelet thinks is on node kind-worker2 May 17 14:16:29.099: INFO: service-headless-9mh45 started at 2022-05-17 14:13:53 +0000 UTC (0+1 container 
statuses recorded) May 17 14:16:29.099: INFO: Container service-headless ready: true, restart count 0 May 17 14:16:29.099: INFO: pod-a2ba308d-816a-406c-bf57-d5840e5d5387 started at 2022-05-17 14:15:58 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.099: INFO: Container write-pod ready: true, restart count 0 May 17 14:16:29.099: INFO: affinity-nodeport-f2r7w started at 2022-05-17 14:15:27 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.099: INFO: Container affinity-nodeport ready: true, restart count 0 May 17 14:16:29.099: INFO: hostexec-kind-worker2-kwzz9 started at 2022-05-17 14:15:35 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.099: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:29.099: INFO: kindnet-mq8qb started at 2022-05-17 14:13:17 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.099: INFO: Container kindnet-cni ready: true, restart count 0 May 17 14:16:29.099: INFO: pod-2d828178-7e03-4331-91b0-2165aa4eeee5 started at <nil> (0+0 container statuses recorded) May 17 14:16:29.099: INFO: csi-mockplugin-0 started at 2022-05-17 14:15:19 +0000 UTC (0+4 container statuses recorded) May 17 14:16:29.099: INFO: Container busybox ready: true, restart count 0 May 17 14:16:29.099: INFO: Container csi-provisioner ready: true, restart count 0 May 17 14:16:29.099: INFO: Container driver-registrar ready: true, restart count 0 May 17 14:16:29.099: INFO: Container mock ready: true, restart count 0 May 17 14:16:29.099: INFO: hostexec-kind-worker2-k8cx7 started at 2022-05-17 14:15:45 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.099: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:29.099: INFO: pod-server-2 started at 2022-05-17 14:15:45 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.099: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:29.099: INFO: hostexec-kind-worker2-gh6gn started at 2022-05-17 14:15:19 +0000 UTC 
(0+1 container statuses recorded) May 17 14:16:29.099: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:29.099: INFO: ss-0 started at 2022-05-17 14:15:25 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.099: INFO: Container webserver ready: true, restart count 0 May 17 14:16:29.099: INFO: pod-23740531-141e-4062-9ddb-69c15ab1b060 started at 2022-05-17 14:15:53 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.099: INFO: Container write-pod ready: true, restart count 0 May 17 14:16:29.099: INFO: csi-hostpathplugin-0 started at 2022-05-17 14:13:56 +0000 UTC (0+7 container statuses recorded) May 17 14:16:29.099: INFO: Container csi-attacher ready: true, restart count 0 May 17 14:16:29.099: INFO: Container csi-provisioner ready: true, restart count 0 May 17 14:16:29.099: INFO: Container csi-resizer ready: true, restart count 0 May 17 14:16:29.099: INFO: Container csi-snapshotter ready: true, restart count 0 May 17 14:16:29.099: INFO: Container hostpath ready: true, restart count 0 May 17 14:16:29.099: INFO: Container liveness-probe ready: true, restart count 0 May 17 14:16:29.099: INFO: Container node-driver-registrar ready: true, restart count 0 May 17 14:16:29.099: INFO: service-proxy-toggled-wrl7r started at 2022-05-17 14:15:47 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.099: INFO: Container service-proxy-toggled ready: true, restart count 0 May 17 14:16:29.099: INFO: ss2-0 started at 2022-05-17 14:16:01 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.099: INFO: Container webserver ready: false, restart count 0 May 17 14:16:29.099: INFO: rs-vkdrm started at 2022-05-17 14:14:49 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.099: INFO: Container donothing ready: false, restart count 0 May 17 14:16:29.099: INFO: pvc-volume-tester-wh448 started at 2022-05-17 14:15:47 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.099: INFO: Container volume-tester ready: true, restart 
count 0 May 17 14:16:29.099: INFO: service-proxy-disabled-xzkc8 started at 2022-05-17 14:15:32 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.099: INFO: Container service-proxy-disabled ready: true, restart count 0 May 17 14:16:29.099: INFO: frontend-685fc574d5-lc6wl started at 2022-05-17 14:15:55 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.099: INFO: Container guestbook-frontend ready: true, restart count 0 May 17 14:16:29.099: INFO: frontend-685fc574d5-mj2wn started at 2022-05-17 14:15:55 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.099: INFO: Container guestbook-frontend ready: true, restart count 0 May 17 14:16:29.099: INFO: service-proxy-disabled-brdhb started at 2022-05-17 14:15:32 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.099: INFO: Container service-proxy-disabled ready: true, restart count 0 May 17 14:16:29.099: INFO: csi-mockplugin-attacher-0 started at 2022-05-17 14:13:58 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.099: INFO: Container csi-attacher ready: true, restart count 0 May 17 14:16:29.099: INFO: agnhost-replica-6bcf79b489-xd4h9 started at 2022-05-17 14:15:56 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.099: INFO: Container replica ready: true, restart count 0 May 17 14:16:29.099: INFO: pod-9ec2f9c5-e367-4a4f-8edf-f159344127d8 started at 2022-05-17 14:16:03 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.099: INFO: Container write-pod ready: true, restart count 0 May 17 14:16:29.099: INFO: kube-proxy-sdmmb started at 2022-05-17 14:13:29 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.099: INFO: Container kube-proxy ready: true, restart count 0 May 17 14:16:29.099: INFO: inline-volume-tester-tx6st started at 2022-05-17 14:13:56 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.099: INFO: Container csi-volume-tester ready: true, restart count 0 May 17 14:16:29.099: INFO: csi-mockplugin-0 started at 2022-05-17 14:13:58 +0000 
UTC (0+3 container statuses recorded) May 17 14:16:29.099: INFO: Container csi-provisioner ready: true, restart count 0 May 17 14:16:29.099: INFO: Container driver-registrar ready: true, restart count 0 May 17 14:16:29.099: INFO: Container mock ready: true, restart count 0 May 17 14:16:29.099: INFO: service-headless-toggled-dvx6b started at 2022-05-17 14:14:08 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.099: INFO: Container service-headless-toggled ready: true, restart count 0 May 17 14:16:29.099: INFO: service-proxy-toggled-djh27 started at 2022-05-17 14:15:47 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.099: INFO: Container service-proxy-toggled ready: true, restart count 0 May 17 14:16:30.659: INFO: Latency metrics for node kind-worker2 May 17 14:16:30.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-5115" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
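Each failing test in this report comes with a `go run hack/e2e.go` repro command whose `--ginkgo.focus` value is the full test name with regex metacharacters backslash-escaped and every space replaced by `\s`. A minimal sketch of that escaping (the `focus_regex` helper is hypothetical, not part of hack/e2e.go):

```shell
# Hypothetical helper: turn a Ginkgo test name into the escaped form used by
# the --ginkgo.focus values in this report. Brackets, dashes, dots, parens,
# and backslashes get a leading backslash; spaces become \s.
focus_regex() {
  printf '%s' "$1" | sed -e 's/[][\\.()-]/\\&/g' -e 's/ /\\s/g'
}

focus_regex '[sig-apps] CronJob should be able to schedule after more than 100 missed schedule'
# -> \[sig\-apps\]\sCronJob\sshould\sbe\sable\sto\sschedule\safter\smore\sthan\s100\smissed\sschedule
```

The actual repro commands additionally prefix the suite name (`Kubernetes\se2e\ssuite\s`) and anchor the pattern with a trailing `$`.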
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sCronJob\sshould\sbe\sable\sto\sschedule\safter\smore\sthan\s100\smissed\sschedule$'
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:189 May 17 14:16:22.804: Failed to wait for active jobs in CronJob concurrent in namespace cronjob-5522 Unexpected error: <*errors.StatusError | 0xc000dae140>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "etcdserver: request timed out", Reason: "", Details: nil, Code: 500, }, } etcdserver: request timed out occurred /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:202 from junit_21.xml
[BeforeEach] [sig-apps] CronJob /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 17 14:16:04.330: INFO: >>> kubeConfig: /root/.kube/kind-test-config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to schedule after more than 100 missed schedule /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:189 STEP: Creating a cronjob STEP: Ensuring one job is running May 17 14:16:22.804: FAIL: Failed to wait for active jobs in CronJob concurrent in namespace cronjob-5522 Unexpected error: <*errors.StatusError | 0xc000dae140>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "etcdserver: request timed out", Reason: "", Details: nil, Code: 500, }, } etcdserver: request timed out occurred Full Stack Trace k8s.io/kubernetes/test/e2e/apps.glob..func1.5() /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:202 +0x4b1 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000109080) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc000109080) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b testing.tRunner(0xc000109080, 0x72e36d8) /usr/local/go/src/testing/testing.go:1203 +0xe5 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1248 +0x2b3 [AfterEach] [sig-apps] CronJob /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "cronjob-5522". STEP: Found 0 events.
May 17 14:16:25.659: INFO: POD NODE PHASE GRACE CONDITIONS May 17 14:16:25.659: INFO: May 17 14:16:25.736: INFO: Logging node info for node kind-control-plane May 17 14:16:25.795: INFO: Node Info: &Node{ObjectMeta:{kind-control-plane 1fc1c875-88c2-4f79-b48f-f6ef32b124da 586 0 2022-05-17 14:12:51 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-05-17 14:12:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-05-17 14:12:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2022-05-17 14:13:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2022-05-17 14:13:20 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-17 14:13:20 +0000 UTC,LastTransitionTime:2022-05-17 14:12:47 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-17 14:13:20 +0000 UTC,LastTransitionTime:2022-05-17 14:12:47 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-17 14:13:20 +0000 UTC,LastTransitionTime:2022-05-17 14:12:47 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-17 14:13:20 +0000 
UTC,LastTransitionTime:2022-05-17 14:13:20 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.4,},NodeAddress{Type:Hostname,Address:kind-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:97599c4ddadc4d3eae566d304d7d104d,SystemUUID:e786229d-3adc-40a9-a01d-f7a176ae9288,BootID:74a98684-362f-4056-ae8e-c98add53e9d2,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.19+64ad393c62a2c5,KubeProxyVersion:v1.22.10-rc.0.19+64ad393c62a2c5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:5963165f44b226e3bcc74555b40074fc3b47b362fc2fce35ac184854a283aa9a k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:63e87a9aad6a6666e3f6f4b9aaee92307a9390cfc1e41a029ceca9f6c4439478 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:123265849,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:59073d1c8ff506ad93a224d17df309f7bbbc344acb535ac430fef15e3866f973 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:7f13d8eb6f1182786e7080a1f94ea5d4cf6873534c2d47e79c444c8216f823aa 
k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:53932856,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 17 14:16:25.795: INFO: Logging kubelet events for node kind-control-plane May 17 14:16:25.864: INFO: Logging pods the kubelet thinks is on node kind-control-plane May 17 14:16:25.981: INFO: kube-proxy-5tzjs started at 2022-05-17 14:13:26 +0000 UTC (0+1 container statuses recorded) May 17 14:16:25.981: INFO: Container kube-proxy ready: true, restart count 0 May 17 14:16:25.981: INFO: coredns-78fcd69978-b6zn2 started at 2022-05-17 14:13:20 +0000 UTC (0+1 container statuses recorded) May 17 14:16:25.981: INFO: Container coredns ready: true, restart count 0 May 17 14:16:25.981: INFO: local-path-provisioner-6c9449b9dd-bllk2 started at 2022-05-17 14:13:20 +0000 UTC (0+1 container statuses recorded) May 17 14:16:25.981: INFO: Container local-path-provisioner ready: true, restart count 0 May 17 14:16:25.981: INFO: coredns-78fcd69978-r97ls started at 2022-05-17 14:13:20 +0000 UTC (0+1 container statuses recorded) May 17 14:16:25.981: INFO: Container coredns ready: true, restart count 0 May 17 14:16:25.981: INFO: kube-scheduler-kind-control-plane started at 2022-05-17 14:13:00 +0000 UTC (0+1 container statuses recorded) May 17 14:16:25.981: INFO: Container kube-scheduler ready: false, restart count 0 May 17 14:16:25.981: INFO: kindnet-s6wb7 started at 2022-05-17 14:13:07 +0000 UTC (0+1 container statuses recorded) May 17 14:16:25.981: INFO: Container kindnet-cni ready: true, restart count 0 May 17 
14:16:25.981: INFO: etcd-kind-control-plane started at 2022-05-17 14:13:00 +0000 UTC (0+1 container statuses recorded) May 17 14:16:25.981: INFO: Container etcd ready: true, restart count 0 May 17 14:16:25.981: INFO: kube-apiserver-kind-control-plane started at 2022-05-17 14:13:01 +0000 UTC (0+1 container statuses recorded) May 17 14:16:25.981: INFO: Container kube-apiserver ready: true, restart count 0 May 17 14:16:25.981: INFO: kube-controller-manager-kind-control-plane started at 2022-05-17 14:13:01 +0000 UTC (0+1 container statuses recorded) May 17 14:16:25.981: INFO: Container kube-controller-manager ready: false, restart count 0 May 17 14:16:26.224: INFO: Latency metrics for node kind-control-plane May 17 14:16:26.225: INFO: Logging node info for node kind-worker May 17 14:16:26.235: INFO: Node Info: &Node{ObjectMeta:{kind-worker 5ba69ac9-ccd5-447d-9f27-9bee10b23121 6999 0 2022-05-17 14:13:16 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker kubernetes.io/os:linux topology.hostpath.csi/node:kind-worker] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-9914":"kind-worker","csi-mock-csi-mock-volumes-7782":"csi-mock-csi-mock-volumes-7782"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubeadm Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-05-17 14:15:50 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2022-05-17 14:15:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:56 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 
+0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:56 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:56 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-17 14:15:56 +0000 UTC,LastTransitionTime:2022-05-17 14:13:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.2,},NodeAddress{Type:Hostname,Address:kind-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d7e8057f975041deb201a063c162c158,SystemUUID:fc207ccb-2e60-4380-b38c-671b606f274a,BootID:74a98684-362f-4056-ae8e-c98add53e9d2,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.19+64ad393c62a2c5,KubeProxyVersion:v1.22.10-rc.0.19+64ad393c62a2c5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:5963165f44b226e3bcc74555b40074fc3b47b362fc2fce35ac184854a283aa9a k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:63e87a9aad6a6666e3f6f4b9aaee92307a9390cfc1e41a029ceca9f6c4439478 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:123265849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 
k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:59073d1c8ff506ad93a224d17df309f7bbbc344acb535ac430fef15e3866f973 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:86742272,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:7f13d8eb6f1182786e7080a1f94ea5d4cf6873534c2d47e79c444c8216f823aa k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:53932856,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 
k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b 
k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-ephemeral-9914^da35d3ba-d5eb-11ec-918a-225d08066155],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-ephemeral-9914^da35d3ba-d5eb-11ec-918a-225d08066155,DevicePath:,},},Config:nil,},} May 17 14:16:26.235: INFO: Logging kubelet events for node kind-worker May 17 14:16:26.301: INFO: Logging pods the kubelet thinks is on node kind-worker May 17 14:16:26.384: INFO: hostexec-kind-worker-gzkkg started at 2022-05-17 14:15:51 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.384: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:26.384: INFO: hostexec-kind-worker-28fdh started at 2022-05-17 14:15:56 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.384: INFO: Container agnhost-container ready: false, restart count 0 May 17 14:16:26.384: INFO: pod-client started at 2022-05-17 14:15:09 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.384: INFO: Container pod-client ready: true, restart count 0 May 17 14:16:26.384: INFO: liveness-7a123bfb-f826-4ce3-ad61-ebc03b5bc931 started at 2022-05-17 14:14:13 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.384: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:26.384: INFO: concurrent-27546616-5t86w started at <nil> (0+0 container statuses recorded) May 17 14:16:26.384: INFO: kindnet-56p79 started at 
2022-05-17 14:13:17 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.384: INFO: Container kindnet-cni ready: true, restart count 0 May 17 14:16:26.384: INFO: csi-mockplugin-0 started at <nil> (0+0 container statuses recorded) May 17 14:16:26.384: INFO: implicit-nonroot-uid started at <nil> (0+0 container statuses recorded) May 17 14:16:26.384: INFO: ss-1 started at <nil> (0+0 container statuses recorded) May 17 14:16:26.384: INFO: affinity-nodeport-xj24q started at 2022-05-17 14:15:27 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.384: INFO: Container affinity-nodeport ready: true, restart count 0 May 17 14:16:26.384: INFO: csi-hostpathplugin-0 started at 2022-05-17 14:15:44 +0000 UTC (0+7 container statuses recorded) May 17 14:16:26.384: INFO: Container csi-attacher ready: true, restart count 0 May 17 14:16:26.384: INFO: Container csi-provisioner ready: true, restart count 0 May 17 14:16:26.384: INFO: Container csi-resizer ready: true, restart count 0 May 17 14:16:26.384: INFO: Container csi-snapshotter ready: true, restart count 0 May 17 14:16:26.384: INFO: Container hostpath ready: true, restart count 0 May 17 14:16:26.384: INFO: Container liveness-probe ready: true, restart count 0 May 17 14:16:26.384: INFO: Container node-driver-registrar ready: true, restart count 0 May 17 14:16:26.384: INFO: deployment-55649fd747-6dbnz started at 2022-05-17 14:14:10 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.384: INFO: Container nginx ready: false, restart count 0 May 17 14:16:26.384: INFO: agnhost-primary-5db8ddd565-ktltd started at 2022-05-17 14:15:55 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.384: INFO: Container primary ready: true, restart count 0 May 17 14:16:26.384: INFO: affinity-nodeport-skwpx started at 2022-05-17 14:15:27 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.384: INFO: Container affinity-nodeport ready: false, restart count 0 May 17 14:16:26.384: INFO: 
send-events-4adc3414-703f-4b62-a42c-926d4b4b6fc2 started at <nil> (0+0 container statuses recorded) May 17 14:16:26.384: INFO: termination-message-container5de5961b-b8bb-4868-adb0-b23802f044ae started at <nil> (0+0 container statuses recorded) May 17 14:16:26.384: INFO: service-proxy-toggled-mlq7q started at 2022-05-17 14:15:47 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.384: INFO: Container service-proxy-toggled ready: true, restart count 0 May 17 14:16:26.384: INFO: hostexec-kind-worker-6t6pq started at 2022-05-17 14:15:52 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.384: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:26.384: INFO: agnhost-replica-6bcf79b489-885sh started at 2022-05-17 14:15:56 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.384: INFO: Container replica ready: false, restart count 0 May 17 14:16:26.384: INFO: hostexec-kind-worker-kbzwd started at <nil> (0+0 container statuses recorded) May 17 14:16:26.384: INFO: service-headless-tgqlh started at 2022-05-17 14:13:53 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.384: INFO: Container service-headless ready: true, restart count 0 May 17 14:16:26.384: INFO: deployment-55649fd747-f5bss started at 2022-05-17 14:14:10 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.384: INFO: Container nginx ready: false, restart count 0 May 17 14:16:26.384: INFO: pod-terminate-status-2-1 started at <nil> (0+0 container statuses recorded) May 17 14:16:26.384: INFO: service-proxy-disabled-qk4md started at 2022-05-17 14:15:32 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.384: INFO: Container service-proxy-disabled ready: true, restart count 0 May 17 14:16:26.384: INFO: busybox-readonly-fs83fb0cb9-60d4-419e-8e63-6beee25ccd06 started at 2022-05-17 14:15:24 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.384: INFO: Container busybox-readonly-fs83fb0cb9-60d4-419e-8e63-6beee25ccd06 ready: true, restart 
count 0 May 17 14:16:26.384: INFO: pod-b8564c48-8aa2-4bdb-a5eb-256f8afd47cc started at <nil> (0+0 container statuses recorded) May 17 14:16:26.384: INFO: concurrent-27546615-rw8md started at 2022-05-17 14:15:00 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.384: INFO: Container c ready: true, restart count 0 May 17 14:16:26.384: INFO: pod-qos-class-5c206c25-c993-4e12-9d89-06931c05bb47 started at <nil> (0+0 container statuses recorded) May 17 14:16:26.384: INFO: frontend-685fc574d5-64jqm started at 2022-05-17 14:15:55 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.384: INFO: Container guestbook-frontend ready: true, restart count 0 May 17 14:16:26.384: INFO: service-headless-q5g89 started at 2022-05-17 14:13:53 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.385: INFO: Container service-headless ready: true, restart count 0 May 17 14:16:26.385: INFO: service-headless-toggled-9zh79 started at 2022-05-17 14:14:08 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.385: INFO: Container service-headless-toggled ready: true, restart count 0 May 17 14:16:26.385: INFO: service-headless-toggled-nw4rn started at 2022-05-17 14:14:08 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.385: INFO: Container service-headless-toggled ready: true, restart count 0 May 17 14:16:26.385: INFO: inline-volume-tester-h6pnw started at <nil> (0+0 container statuses recorded) May 17 14:16:26.385: INFO: kube-proxy-dr9f5 started at 2022-05-17 14:13:23 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.385: INFO: Container kube-proxy ready: true, restart count 0 May 17 14:16:26.385: INFO: pod-terminate-status-0-1 started at <nil> (0+0 container statuses recorded) May 17 14:16:27.777: INFO: Latency metrics for node kind-worker May 17 14:16:27.777: INFO: Logging node info for node kind-worker2 May 17 14:16:27.812: INFO: Node Info: &Node{ObjectMeta:{kind-worker2 d009e97f-d73b-4ed0-8afd-9f776bcb704d 6998 0 2022-05-17 14:13:16 +0000 UTC 
<nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker2 kubernetes.io/os:linux topology.hostpath.csi/node:kind-worker2] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-2591":"kind-worker2","csi-mock-csi-mock-volumes-8289":"csi-mock-csi-mock-volumes-8289"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubeadm Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubelet Update v1 2022-05-17 14:15:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:47 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:47 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:47 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-17 14:15:47 +0000 UTC,LastTransitionTime:2022-05-17 14:13:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.3,},NodeAddress{Type:Hostname,Address:kind-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:521433a178fd4494a7881394e8d644cf,SystemUUID:c86f292f-55cb-49c2-9dda-efb91a06dab9,BootID:74a98684-362f-4056-ae8e-c98add53e9d2,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.19+64ad393c62a2c5,KubeProxyVersion:v1.22.10-rc.0.19+64ad393c62a2c5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:5963165f44b226e3bcc74555b40074fc3b47b362fc2fce35ac184854a283aa9a k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:63e87a9aad6a6666e3f6f4b9aaee92307a9390cfc1e41a029ceca9f6c4439478 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:123265849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:59073d1c8ff506ad93a224d17df309f7bbbc344acb535ac430fef15e3866f973 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:7f13d8eb6f1182786e7080a1f94ea5d4cf6873534c2d47e79c444c8216f823aa k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:53932856,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 
k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce 
k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 17 14:16:27.813: INFO: Logging kubelet events for node kind-worker2 May 17 14:16:27.849: INFO: Logging pods the kubelet thinks is on node kind-worker2 May 17 14:16:27.938: INFO: csi-hostpathplugin-0 started at 2022-05-17 14:13:56 +0000 UTC (0+7 container 
statuses recorded) May 17 14:16:27.938: INFO: Container csi-attacher ready: true, restart count 0 May 17 14:16:27.938: INFO: Container csi-provisioner ready: true, restart count 0 May 17 14:16:27.938: INFO: Container csi-resizer ready: true, restart count 0 May 17 14:16:27.938: INFO: Container csi-snapshotter ready: true, restart count 0 May 17 14:16:27.938: INFO: Container hostpath ready: true, restart count 0 May 17 14:16:27.938: INFO: Container liveness-probe ready: true, restart count 0 May 17 14:16:27.938: INFO: Container node-driver-registrar ready: true, restart count 0 May 17 14:16:27.938: INFO: ss-0 started at 2022-05-17 14:15:25 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.938: INFO: Container webserver ready: true, restart count 0 May 17 14:16:27.938: INFO: pod-23740531-141e-4062-9ddb-69c15ab1b060 started at 2022-05-17 14:15:53 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.938: INFO: Container write-pod ready: true, restart count 0 May 17 14:16:27.938: INFO: pvc-volume-tester-wh448 started at 2022-05-17 14:15:47 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.938: INFO: Container volume-tester ready: true, restart count 0 May 17 14:16:27.938: INFO: service-proxy-toggled-wrl7r started at 2022-05-17 14:15:47 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.938: INFO: Container service-proxy-toggled ready: true, restart count 0 May 17 14:16:27.938: INFO: ss2-0 started at 2022-05-17 14:16:01 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.938: INFO: Container webserver ready: false, restart count 0 May 17 14:16:27.938: INFO: rs-vkdrm started at 2022-05-17 14:14:49 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.938: INFO: Container donothing ready: false, restart count 0 May 17 14:16:27.938: INFO: service-proxy-disabled-brdhb started at 2022-05-17 14:15:32 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.938: INFO: Container service-proxy-disabled ready: true, restart 
count 0 May 17 14:16:27.938: INFO: pod-e8401689-4cef-4e20-9f2e-264844f9d704 started at 2022-05-17 14:15:53 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.938: INFO: Container write-pod ready: true, restart count 0 May 17 14:16:27.938: INFO: service-proxy-disabled-xzkc8 started at 2022-05-17 14:15:32 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.938: INFO: Container service-proxy-disabled ready: true, restart count 0 May 17 14:16:27.938: INFO: frontend-685fc574d5-lc6wl started at 2022-05-17 14:15:55 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.938: INFO: Container guestbook-frontend ready: true, restart count 0 May 17 14:16:27.938: INFO: frontend-685fc574d5-mj2wn started at 2022-05-17 14:15:55 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.938: INFO: Container guestbook-frontend ready: true, restart count 0 May 17 14:16:27.938: INFO: csi-mockplugin-attacher-0 started at 2022-05-17 14:13:58 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.938: INFO: Container csi-attacher ready: true, restart count 0 May 17 14:16:27.938: INFO: kube-proxy-sdmmb started at 2022-05-17 14:13:29 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.938: INFO: Container kube-proxy ready: true, restart count 0 May 17 14:16:27.938: INFO: agnhost-replica-6bcf79b489-xd4h9 started at 2022-05-17 14:15:56 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.938: INFO: Container replica ready: true, restart count 0 May 17 14:16:27.938: INFO: pod-9ec2f9c5-e367-4a4f-8edf-f159344127d8 started at 2022-05-17 14:16:03 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.938: INFO: Container write-pod ready: true, restart count 0 May 17 14:16:27.938: INFO: inline-volume-tester-tx6st started at 2022-05-17 14:13:56 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.938: INFO: Container csi-volume-tester ready: true, restart count 0 May 17 14:16:27.938: INFO: csi-mockplugin-0 started at 2022-05-17 14:13:58 +0000 
UTC (0+3 container statuses recorded) May 17 14:16:27.938: INFO: Container csi-provisioner ready: true, restart count 0 May 17 14:16:27.938: INFO: Container driver-registrar ready: true, restart count 0 May 17 14:16:27.938: INFO: Container mock ready: true, restart count 0 May 17 14:16:27.938: INFO: service-headless-toggled-dvx6b started at 2022-05-17 14:14:08 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.938: INFO: Container service-headless-toggled ready: true, restart count 0 May 17 14:16:27.938: INFO: deployment-55649fd747-jmrdv started at 2022-05-17 14:14:10 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.938: INFO: Container nginx ready: false, restart count 0 May 17 14:16:27.938: INFO: service-proxy-toggled-djh27 started at 2022-05-17 14:15:47 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.938: INFO: Container service-proxy-toggled ready: true, restart count 0 May 17 14:16:27.938: INFO: kindnet-mq8qb started at 2022-05-17 14:13:17 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.938: INFO: Container kindnet-cni ready: true, restart count 0 May 17 14:16:27.938: INFO: service-headless-9mh45 started at 2022-05-17 14:13:53 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.938: INFO: Container service-headless ready: true, restart count 0 May 17 14:16:27.938: INFO: pod-a2ba308d-816a-406c-bf57-d5840e5d5387 started at 2022-05-17 14:15:58 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.938: INFO: Container write-pod ready: true, restart count 0 May 17 14:16:27.938: INFO: affinity-nodeport-f2r7w started at 2022-05-17 14:15:27 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.938: INFO: Container affinity-nodeport ready: true, restart count 0 May 17 14:16:27.938: INFO: hostexec-kind-worker2-kwzz9 started at 2022-05-17 14:15:35 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.938: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:27.938: INFO: 
hostexec-kind-worker2-gh6gn started at 2022-05-17 14:15:19 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.938: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:27.938: INFO: pod-2d828178-7e03-4331-91b0-2165aa4eeee5 started at <nil> (0+0 container statuses recorded) May 17 14:16:27.939: INFO: csi-mockplugin-0 started at 2022-05-17 14:15:19 +0000 UTC (0+4 container statuses recorded) May 17 14:16:27.939: INFO: Container busybox ready: true, restart count 0 May 17 14:16:27.939: INFO: Container csi-provisioner ready: true, restart count 0 May 17 14:16:27.939: INFO: Container driver-registrar ready: true, restart count 0 May 17 14:16:27.939: INFO: Container mock ready: true, restart count 0 May 17 14:16:27.939: INFO: hostexec-kind-worker2-k8cx7 started at 2022-05-17 14:15:45 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.939: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:27.939: INFO: pod-server-2 started at 2022-05-17 14:15:45 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.939: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:30.640: INFO: Latency metrics for node kind-worker2 May 17 14:16:30.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "cronjob-5522" for this suite.
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sStatefulSet\sBasic\sStatefulSet\sfunctionality\s\[StatefulSetBasic\]\sshould\simplement\slegacy\sreplacement\swhen\sthe\supdate\sstrategy\sis\sOnDelete$'
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:503 May 17 14:16:22.817: Unexpected error: <*errors.StatusError | 0xc00049e500>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "etcdserver: request timed out", Reason: "", Details: nil, Code: 500, }, } etcdserver: request timed out occurred /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/rest.go:68from junit_24.xml
[BeforeEach] [sig-apps] StatefulSet /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client May 17 14:14:30.576: INFO: >>> kubeConfig: /root/.kube/kind-test-config �[1mSTEP�[0m: Building a namespace api object, basename statefulset �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107 �[1mSTEP�[0m: Creating service test in namespace statefulset-3114 [It] should implement legacy replacement when the update strategy is OnDelete /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:503 �[1mSTEP�[0m: Creating a new StatefulSet May 17 14:14:30.764: INFO: Found 0 stateful pods, waiting for 3 May 17 14:14:40.771: INFO: Found 2 stateful pods, waiting for 3 May 17 14:14:50.815: INFO: Found 2 stateful pods, waiting for 3 May 17 14:15:00.779: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 17 14:15:00.779: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 17 14:15:00.779: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 17 14:15:10.768: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 17 14:15:10.768: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 17 14:15:10.769: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false May 17 14:15:20.781: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 17 
14:15:20.781: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 17 14:15:20.781: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true �[1mSTEP�[0m: Restoring Pods to the current revision May 17 14:15:21.016: INFO: Found 1 stateful pods, waiting for 3 May 17 14:15:31.022: INFO: Found 1 stateful pods, waiting for 3 May 17 14:15:41.026: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 17 14:15:41.026: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 17 14:15:41.026: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 17 14:15:51.105: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 17 14:15:51.105: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 17 14:15:51.105: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true �[1mSTEP�[0m: Updating stateful set template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-1 May 17 14:15:51.184: INFO: Updating stateful set ss2 �[1mSTEP�[0m: Creating a new revision �[1mSTEP�[0m: Recreating Pods at the new revision May 17 14:16:01.550: INFO: Found 1 stateful pods, waiting for 3 May 17 14:16:22.817: FAIL: Unexpected error: <*errors.StatusError | 0xc00049e500>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "etcdserver: request timed out", Reason: "", Details: nil, Code: 500, }, } etcdserver: request timed out occurred Full Stack Trace k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList(0x79bc3e8, 0xc0020ce2c0, 0xc00409e500, 0x18) 
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/rest.go:68 +0x21e k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning.func1(0xc000f32c00, 0xc000f32c00, 0xc000f32c00) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/wait.go:37 +0x67 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x79286a8, 0xc000056098, 0xc00285ee54, 0x1, 0x2) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217 +0x26 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext(0x79286a8, 0xc000056098, 0xc0036563b0, 0x0, 0x0, 0x0) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:230 +0x7f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext(0x79286a8, 0xc000056098, 0xc00422aa20, 0xc0036563b0, 0x0, 0x0) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:657 +0x159 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll(0x79286a8, 0xc000056098, 0xc00422aa01, 0xc00422aa20, 0xc0036563b0, 0x686a460, 0xc0036563b0) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:591 +0xa5 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext(0x79286a8, 0xc000056098, 0x2540be400, 0x8bb2c97000, 0xc0036563b0, 0x6beee60, 0x1) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:523 +0x66 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x2540be400, 0x8bb2c97000, 0xc003663ec0, 0x0, 0x0) 
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:509 +0x6f k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning(0x79bc3e8, 0xc0020ce2c0, 0x300000003, 0xc00409e500) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/wait.go:35 +0x9d k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/wait.go:80 k8s.io/kubernetes/test/e2e/apps.glob..func9.2.9() /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:559 +0x1311 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000830600) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc000830600) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b testing.tRunner(0xc000830600, 0x72e36d8) /usr/local/go/src/testing/testing.go:1203 +0xe5 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1248 +0x2b3 E0517 14:16:22.818374 86703 runtime.go:78] Observed a panic: ginkgowrapper.FailurePanic{Message:"May 17 14:16:22.817: Unexpected error:\n <*errors.StatusError | 0xc00049e500>: {\n ErrStatus: {\n TypeMeta: {Kind: \"\", APIVersion: \"\"},\n ListMeta: {\n SelfLink: \"\",\n ResourceVersion: \"\",\n Continue: \"\",\n RemainingItemCount: nil,\n },\n Status: \"Failure\",\n Message: \"etcdserver: request timed out\",\n Reason: \"\",\n Details: nil,\n Code: 500,\n },\n }\n etcdserver: request timed out\noccurred", Filename:"/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/rest.go", Line:68, FullStackTrace:"k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList(0x79bc3e8, 0xc0020ce2c0, 0xc00409e500, 
0x18)\n\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/rest.go:68 +0x21e\nk8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning.func1(0xc000f32c00, 0xc000f32c00, 0xc000f32c00)\n\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/wait.go:37 +0x67\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x79286a8, 0xc000056098, 0xc00285ee54, 0x1, 0x2)\n\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217 +0x26\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext(0x79286a8, 0xc000056098, 0xc0036563b0, 0x0, 0x0, 0x0)\n\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:230 +0x7f\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext(0x79286a8, 0xc000056098, 0xc00422aa20, 0xc0036563b0, 0x0, 0x0)\n\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:657 +0x159\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll(0x79286a8, 0xc000056098, 0xc00422aa01, 0xc00422aa20, 0xc0036563b0, 0x686a460, 0xc0036563b0)\n\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:591 +0xa5\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext(0x79286a8, 0xc000056098, 0x2540be400, 0x8bb2c97000, 0xc0036563b0, 0x6beee60, 0x1)\n\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:523 +0x66\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x2540be400, 0x8bb2c97000, 0xc003663ec0, 0x0, 
0x0)\n\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:509 +0x6f\nk8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning(0x79bc3e8, 0xc0020ce2c0, 0x300000003, 0xc00409e500)\n\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/wait.go:35 +0x9d\nk8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...)\n\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/wait.go:80\nk8s.io/kubernetes/test/e2e/apps.glob..func9.2.9()\n\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:559 +0x1311\nk8s.io/kubernetes/test/e2e.RunE2ETests(0xc000830600)\n\t_output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c\nk8s.io/kubernetes/test/e2e.TestE2E(0xc000830600)\n\t_output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b\ntesting.tRunner(0xc000830600, 0x72e36d8)\n\t/usr/local/go/src/testing/testing.go:1203 +0xe5\ncreated by testing.(*T).Run\n\t/usr/local/go/src/testing/testing.go:1248 +0x2b3"} ( Your test failed. Ginkgo panics to prevent subsequent assertions from running. Normally Ginkgo rescues this panic so you shouldn't see it. But, if you make an assertion in a goroutine, Ginkgo can't capture the panic. To circumvent this, you should call defer GinkgoRecover() at the top of the goroutine that caused this panic. 
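Ginkgo's note above ("call defer GinkgoRecover() at the top of the goroutine that caused this panic") refers to the fact that a panic raised in a bare goroutine bypasses the framework's usual rescue and crashes the test binary. The pattern can be sketched in plain stdlib Go, with a deferred `recover()` standing in for `GinkgoRecover()`; `runInGoroutine` is an illustrative helper, not part of Ginkgo or the e2e framework:

```go
package main

import "fmt"

// runInGoroutine runs f in a fresh goroutine and reports whether it
// panicked. The deferred recover at the top of the goroutine plays the
// same role as `defer GinkgoRecover()` in a Ginkgo test: without it,
// the panic would escape the goroutine and kill the whole process.
func runInGoroutine(f func()) (panicked bool) {
	done := make(chan bool)
	go func() {
		defer func() {
			// Analogous to `defer GinkgoRecover()`.
			if r := recover(); r != nil {
				done <- true
				return
			}
			done <- false
		}()
		f()
	}()
	return <-done
}

func main() {
	fmt.Println(runInGoroutine(func() { panic("assertion failed") })) // true
	fmt.Println(runInGoroutine(func() {}))                            // false
}
```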
) goroutine 120 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x6c03da0, 0xc002fa2740) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86 panic(0x6c03da0, 0xc002fa2740) /usr/local/go/src/runtime/panic.go:965 +0x1b9 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc001d46480, 0x224, 0x8985320, 0x71, 0x44, 0xc000f18c80, 0xc61) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5 panic(0x6330600, 0x77df3e0) /usr/local/go/src/runtime/panic.go:965 +0x1b9 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail(0xc001d46480, 0x224, 0xc00285e388, 0x1, 0x1) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:260 +0xc8 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc001d46480, 0x224, 0xc00285e470, 0x1, 0x1) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5 k8s.io/kubernetes/test/e2e/framework.Fail(0xc001d46240, 0x20f, 0xc0026d67a0, 0x1, 0x1) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:62 +0x1ea k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).match(0xc00285e5f8, 0x7914db8, 0xa0e1f88, 0x0, 0x0, 0x0, 0x0, 0xc00049e500) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:79 +0x216 k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).NotTo(0xc00285e5f8, 0x7914db8, 0xa0e1f88, 0x0, 0x0, 0x0, 0x0) 
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:48 +0xc7 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, 0x78b1d40, 0xc00049e500, 0x0, 0x0, 0x0) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/expect.go:46 +0xe7 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/expect.go:40 k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList(0x79bc3e8, 0xc0020ce2c0, 0xc00409e500, 0x18) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/rest.go:68 +0x21e k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning.func1(0xc000f32c00, 0xc000f32c00, 0xc000f32c00) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/wait.go:37 +0x67 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x79286a8, 0xc000056098, 0xc00285ee54, 0x1, 0x2) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217 +0x26 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext(0x79286a8, 0xc000056098, 0xc0036563b0, 0x0, 0x0, 0x0) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:230 +0x7f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext(0x79286a8, 0xc000056098, 0xc00422aa20, 0xc0036563b0, 0x0, 0x0) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:657 +0x159 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll(0x79286a8, 0xc000056098, 0xc00422aa01, 0xc00422aa20, 0xc0036563b0, 0x686a460, 0xc0036563b0) 
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:591 +0xa5 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext(0x79286a8, 0xc000056098, 0x2540be400, 0x8bb2c97000, 0xc0036563b0, 0x6beee60, 0x1) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:523 +0x66 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x2540be400, 0x8bb2c97000, 0xc003663ec0, 0x0, 0x0) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:509 +0x6f k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning(0x79bc3e8, 0xc0020ce2c0, 0x300000003, 0xc00409e500) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/wait.go:35 +0x9d k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/wait.go:80 k8s.io/kubernetes/test/e2e/apps.glob..func9.2.9() /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:559 +0x1311 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc00050daa0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc00050daa0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) 
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run(0xc00028cbc0, 0x78adcc0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/it_node.go:26 +0x87 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc003dcda40, 0x0, 0x78adcc0, 0xc000070880) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:215 +0x72f k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc003dcda40, 0x78adcc0, 0xc000070880) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc000948140, 0xc003dcda40, 0x0) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc000948140, 0x1) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x147 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc000948140, 0xc0045c3230) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc0001a0070, 0x7fec86dfe140, 0xc000830600, 0x70ae1f5, 0x14, 0xc000d14060, 0x3, 0x3, 0x79634d8, 0xc000070880, ...) 
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:79 +0x546 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x78b37c0, 0xc000830600, 0x70ae1f5, 0x14, 0xc000d18000, 0x3, 0x4, 0x4) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:219 +0x218 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x78b37c0, 0xc000830600, 0x70ae1f5, 0x14, 0xc000d02000, 0x2, 0x2, 0x25) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:207 +0xad k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000830600) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc000830600) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b testing.tRunner(0xc000830600, 0x72e36d8) /usr/local/go/src/testing/testing.go:1203 +0xe5 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1248 +0x2b3 [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118 May 17 14:16:25.591: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://127.0.0.1:44135 --kubeconfig=/root/.kube/kind-test-config --namespace=statefulset-3114 describe po ss2-0' May 17 14:16:25.975: INFO: stderr: "" May 17 14:16:25.975: INFO: stdout: "Name: ss2-0\nNamespace: statefulset-3114\nPriority: 0\nNode: kind-worker2/\nLabels: baz=blah\n controller-revision-hash=ss2-5bbbc9fc94\n foo=bar\n statefulset.kubernetes.io/pod-name=ss2-0\nAnnotations: <none>\nStatus: Pending\nIP: \nIPs: <none>\nControlled By: StatefulSet/ss2\nContainers:\n webserver:\n Image: k8s.gcr.io/e2e-test-images/httpd:2.4.39-1\n Port: <none>\n Host Port: <none>\n Readiness: http-get http://:80/index.html 
delay=0s timeout=1s period=1s #success=1 #failure=1\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cqkdp (ro)\nConditions:\n Type Status\n PodScheduled True \nVolumes:\n kube-api-access-cqkdp:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 24s default-scheduler Successfully assigned statefulset-3114/ss2-0 to kind-worker2\n Normal Pulling 23s kubelet Pulling image \"k8s.gcr.io/e2e-test-images/httpd:2.4.39-1\"\n" May 17 14:16:25.975: INFO: Output of kubectl describe ss2-0: Name: ss2-0 Namespace: statefulset-3114 Priority: 0 Node: kind-worker2/ Labels: baz=blah controller-revision-hash=ss2-5bbbc9fc94 foo=bar statefulset.kubernetes.io/pod-name=ss2-0 Annotations: <none> Status: Pending IP: IPs: <none> Controlled By: StatefulSet/ss2 Containers: webserver: Image: k8s.gcr.io/e2e-test-images/httpd:2.4.39-1 Port: <none> Host Port: <none> Readiness: http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cqkdp (ro) Conditions: Type Status PodScheduled True Volumes: kube-api-access-cqkdp: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: BestEffort Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- 
------- Normal Scheduled 24s default-scheduler Successfully assigned statefulset-3114/ss2-0 to kind-worker2 Normal Pulling 23s kubelet Pulling image "k8s.gcr.io/e2e-test-images/httpd:2.4.39-1" May 17 14:16:25.975: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://127.0.0.1:44135 --kubeconfig=/root/.kube/kind-test-config --namespace=statefulset-3114 logs ss2-0 --tail=100' May 17 14:16:26.305: INFO: rc: 1 May 17 14:16:26.305: INFO: Last 100 log lines of ss2-0: May 17 14:16:26.305: INFO: Deleting all statefulset in ns statefulset-3114 May 17 14:16:26.370: INFO: Scaling statefulset ss2 to 0 May 17 14:17:06.717: INFO: Waiting for statefulset status.replicas updated to 0 May 17 14:17:06.761: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "statefulset-3114". STEP: Found 43 events.
May 17 14:17:07.009: INFO: At 2022-05-17 14:14:30 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulCreate: create Pod ss2-0 in StatefulSet ss2 successful May 17 14:17:07.009: INFO: At 2022-05-17 14:14:30 +0000 UTC - event for ss2-0: {default-scheduler } Scheduled: Successfully assigned statefulset-3114/ss2-0 to kind-worker May 17 14:17:07.009: INFO: At 2022-05-17 14:14:32 +0000 UTC - event for ss2-0: {kubelet kind-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" already present on machine May 17 14:17:07.009: INFO: At 2022-05-17 14:14:32 +0000 UTC - event for ss2-0: {kubelet kind-worker} Created: Created container webserver May 17 14:17:07.009: INFO: At 2022-05-17 14:14:33 +0000 UTC - event for ss2-0: {kubelet kind-worker} Started: Started container webserver May 17 14:17:07.009: INFO: At 2022-05-17 14:14:40 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulCreate: create Pod ss2-1 in StatefulSet ss2 successful May 17 14:17:07.009: INFO: At 2022-05-17 14:14:40 +0000 UTC - event for ss2-1: {default-scheduler } Scheduled: Successfully assigned statefulset-3114/ss2-1 to kind-worker2 May 17 14:17:07.009: INFO: At 2022-05-17 14:14:42 +0000 UTC - event for ss2-1: {kubelet kind-worker2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" May 17 14:17:07.009: INFO: At 2022-05-17 14:14:50 +0000 UTC - event for ss2-1: {kubelet kind-worker2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" in 8.255844197s May 17 14:17:07.009: INFO: At 2022-05-17 14:14:51 +0000 UTC - event for ss2-1: {kubelet kind-worker2} Created: Created container webserver May 17 14:17:07.009: INFO: At 2022-05-17 14:14:51 +0000 UTC - event for ss2-1: {kubelet kind-worker2} Started: Started container webserver May 17 14:17:07.009: INFO: At 2022-05-17 14:14:56 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulCreate: create Pod ss2-2 in StatefulSet ss2 successful May 17 14:17:07.009: INFO: At 
2022-05-17 14:14:56 +0000 UTC - event for ss2-2: {default-scheduler } Scheduled: Successfully assigned statefulset-3114/ss2-2 to kind-worker May 17 14:17:07.009: INFO: At 2022-05-17 14:14:57 +0000 UTC - event for ss2-2: {kubelet kind-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" already present on machine May 17 14:17:07.009: INFO: At 2022-05-17 14:14:58 +0000 UTC - event for ss2-2: {kubelet kind-worker} Created: Created container webserver May 17 14:17:07.009: INFO: At 2022-05-17 14:14:58 +0000 UTC - event for ss2-2: {kubelet kind-worker} Started: Started container webserver May 17 14:17:07.009: INFO: At 2022-05-17 14:15:20 +0000 UTC - event for ss2-0: {kubelet kind-worker} Killing: Stopping container webserver May 17 14:17:07.009: INFO: At 2022-05-17 14:15:20 +0000 UTC - event for ss2-0: {default-scheduler } Scheduled: Successfully assigned statefulset-3114/ss2-0 to kind-worker2 May 17 14:17:07.009: INFO: At 2022-05-17 14:15:20 +0000 UTC - event for ss2-0: {kubelet kind-worker} Unhealthy: Readiness probe failed: Get "http://10.244.1.33:80/index.html": read tcp 10.244.1.1:51982->10.244.1.33:80: read: connection reset by peer May 17 14:17:07.009: INFO: At 2022-05-17 14:15:20 +0000 UTC - event for ss2-1: {kubelet kind-worker2} Killing: Stopping container webserver May 17 14:17:07.009: INFO: At 2022-05-17 14:15:20 +0000 UTC - event for ss2-2: {kubelet kind-worker} Killing: Stopping container webserver May 17 14:17:07.009: INFO: At 2022-05-17 14:15:23 +0000 UTC - event for ss2-0: {kubelet kind-worker2} Created: Created container webserver May 17 14:17:07.009: INFO: At 2022-05-17 14:15:23 +0000 UTC - event for ss2-0: {kubelet kind-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" already present on machine May 17 14:17:07.009: INFO: At 2022-05-17 14:15:23 +0000 UTC - event for ss2-0: {kubelet kind-worker2} Started: Started container webserver May 17 14:17:07.009: INFO: At 2022-05-17 14:15:32 +0000 UTC - event for 
ss2-1: {default-scheduler } Scheduled: Successfully assigned statefulset-3114/ss2-1 to kind-worker May 17 14:17:07.009: INFO: At 2022-05-17 14:15:33 +0000 UTC - event for ss2-1: {kubelet kind-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" already present on machine May 17 14:17:07.009: INFO: At 2022-05-17 14:15:34 +0000 UTC - event for ss2-1: {kubelet kind-worker} Started: Started container webserver May 17 14:17:07.009: INFO: At 2022-05-17 14:15:34 +0000 UTC - event for ss2-1: {kubelet kind-worker} Created: Created container webserver May 17 14:17:07.009: INFO: At 2022-05-17 14:15:40 +0000 UTC - event for ss2-2: {default-scheduler } Scheduled: Successfully assigned statefulset-3114/ss2-2 to kind-worker May 17 14:17:07.009: INFO: At 2022-05-17 14:15:41 +0000 UTC - event for ss2-2: {kubelet kind-worker} Created: Created container webserver May 17 14:17:07.009: INFO: At 2022-05-17 14:15:41 +0000 UTC - event for ss2-2: {kubelet kind-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" already present on machine May 17 14:17:07.009: INFO: At 2022-05-17 14:15:42 +0000 UTC - event for ss2-2: {kubelet kind-worker} Started: Started container webserver May 17 14:17:07.009: INFO: At 2022-05-17 14:16:01 +0000 UTC - event for ss2-0: {default-scheduler } Scheduled: Successfully assigned statefulset-3114/ss2-0 to kind-worker2 May 17 14:17:07.009: INFO: At 2022-05-17 14:16:01 +0000 UTC - event for ss2-0: {kubelet kind-worker2} Killing: Stopping container webserver May 17 14:17:07.009: INFO: At 2022-05-17 14:16:01 +0000 UTC - event for ss2-1: {kubelet kind-worker} Killing: Stopping container webserver May 17 14:17:07.009: INFO: At 2022-05-17 14:16:01 +0000 UTC - event for ss2-2: {kubelet kind-worker} Killing: Stopping container webserver May 17 14:17:07.009: INFO: At 2022-05-17 14:16:02 +0000 UTC - event for ss2-0: {kubelet kind-worker2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/httpd:2.4.39-1" May 17 14:17:07.009: INFO: 
At 2022-05-17 14:16:27 +0000 UTC - event for ss2-0: {kubelet kind-worker2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/httpd:2.4.39-1" in 24.868188816s May 17 14:17:07.009: INFO: At 2022-05-17 14:16:27 +0000 UTC - event for ss2-0: {kubelet kind-worker2} Created: Created container webserver May 17 14:17:07.009: INFO: At 2022-05-17 14:16:28 +0000 UTC - event for ss2-0: {kubelet kind-worker2} Started: Started container webserver May 17 14:17:07.009: INFO: At 2022-05-17 14:16:59 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulDelete: delete Pod ss2-0 in StatefulSet ss2 successful May 17 14:17:07.009: INFO: At 2022-05-17 14:16:59 +0000 UTC - event for ss2-0: {kubelet kind-worker2} Unhealthy: Readiness probe failed: Get "http://10.244.2.68:80/index.html": dial tcp 10.244.2.68:80: connect: connection refused May 17 14:17:07.009: INFO: At 2022-05-17 14:16:59 +0000 UTC - event for ss2-0: {kubelet kind-worker2} Killing: Stopping container webserver May 17 14:17:07.035: INFO: POD NODE PHASE GRACE CONDITIONS May 17 14:17:07.035: INFO: May 17 14:17:07.055: INFO: Logging node info for node kind-control-plane May 17 14:17:07.080: INFO: Node Info: &Node{ObjectMeta:{kind-control-plane 1fc1c875-88c2-4f79-b48f-f6ef32b124da 586 0 2022-05-17 14:12:51 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-05-17 14:12:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-05-17 14:12:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2022-05-17 14:13:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2022-05-17 14:13:20 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 
DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-17 14:13:20 +0000 UTC,LastTransitionTime:2022-05-17 14:12:47 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-17 14:13:20 +0000 UTC,LastTransitionTime:2022-05-17 14:12:47 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-17 14:13:20 +0000 UTC,LastTransitionTime:2022-05-17 14:12:47 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-17 14:13:20 +0000 UTC,LastTransitionTime:2022-05-17 14:13:20 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.4,},NodeAddress{Type:Hostname,Address:kind-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:97599c4ddadc4d3eae566d304d7d104d,SystemUUID:e786229d-3adc-40a9-a01d-f7a176ae9288,BootID:74a98684-362f-4056-ae8e-c98add53e9d2,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.19+64ad393c62a2c5,KubeProxyVersion:v1.22.10-rc.0.19+64ad393c62a2c5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:5963165f44b226e3bcc74555b40074fc3b47b362fc2fce35ac184854a283aa9a k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:63e87a9aad6a6666e3f6f4b9aaee92307a9390cfc1e41a029ceca9f6c4439478 
k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:123265849,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:59073d1c8ff506ad93a224d17df309f7bbbc344acb535ac430fef15e3866f973 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:7f13d8eb6f1182786e7080a1f94ea5d4cf6873534c2d47e79c444c8216f823aa k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:53932856,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 17 14:17:07.080: INFO: Logging kubelet events for node kind-control-plane May 17 14:17:07.170: INFO: Logging pods the kubelet thinks is on node kind-control-plane May 17 14:17:07.209: INFO: kindnet-s6wb7 started at 2022-05-17 14:13:07 +0000 UTC (0+1 container statuses recorded) May 17 14:17:07.209: INFO: Container kindnet-cni ready: true, restart count 0 May 17 14:17:07.209: INFO: etcd-kind-control-plane started at 2022-05-17 14:13:00 +0000 UTC (0+1 container statuses recorded) May 17 14:17:07.209: INFO: Container etcd ready: true, restart count 0 May 17 14:17:07.209: INFO: kube-apiserver-kind-control-plane started at 2022-05-17 14:13:01 +0000 UTC (0+1 container statuses recorded) May 17 14:17:07.209: INFO: Container kube-apiserver ready: true, restart count 0 May 17 14:17:07.209: INFO: kube-controller-manager-kind-control-plane started at 2022-05-17 14:13:01 +0000 UTC (0+1 container statuses recorded) May 17 
14:17:07.209: INFO: Container kube-controller-manager ready: true, restart count 1 May 17 14:17:07.209: INFO: kube-scheduler-kind-control-plane started at 2022-05-17 14:13:00 +0000 UTC (0+1 container statuses recorded) May 17 14:17:07.209: INFO: Container kube-scheduler ready: true, restart count 1 May 17 14:17:07.209: INFO: coredns-78fcd69978-b6zn2 started at 2022-05-17 14:13:20 +0000 UTC (0+1 container statuses recorded) May 17 14:17:07.209: INFO: Container coredns ready: true, restart count 0 May 17 14:17:07.209: INFO: local-path-provisioner-6c9449b9dd-bllk2 started at 2022-05-17 14:13:20 +0000 UTC (0+1 container statuses recorded) May 17 14:17:07.209: INFO: Container local-path-provisioner ready: true, restart count 0 May 17 14:17:07.209: INFO: coredns-78fcd69978-r97ls started at 2022-05-17 14:13:20 +0000 UTC (0+1 container statuses recorded) May 17 14:17:07.209: INFO: Container coredns ready: true, restart count 0 May 17 14:17:07.209: INFO: kube-proxy-5tzjs started at 2022-05-17 14:13:26 +0000 UTC (0+1 container statuses recorded) May 17 14:17:07.209: INFO: Container kube-proxy ready: true, restart count 0 May 17 14:17:07.367: INFO: Latency metrics for node kind-control-plane May 17 14:17:07.367: INFO: Logging node info for node kind-worker May 17 14:17:07.390: INFO: Node Info: &Node{ObjectMeta:{kind-worker 5ba69ac9-ccd5-447d-9f27-9bee10b23121 7563 0 2022-05-17 14:13:16 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker kubernetes.io/os:linux topology.hostpath.csi/node:kind-worker] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-9914":"kind-worker","csi-mock-csi-mock-volumes-7782":"csi-mock-csi-mock-volumes-7782"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-17 14:13:16 +0000 UTC 
FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubeadm Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubelet Update v1 2022-05-17 14:16:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki 
BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-17 14:16:56 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-17 14:16:56 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-17 14:16:56 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-17 14:16:56 +0000 UTC,LastTransitionTime:2022-05-17 14:13:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.2,},NodeAddress{Type:Hostname,Address:kind-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d7e8057f975041deb201a063c162c158,SystemUUID:fc207ccb-2e60-4380-b38c-671b606f274a,BootID:74a98684-362f-4056-ae8e-c98add53e9d2,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.19+64ad393c62a2c5,KubeProxyVersion:v1.22.10-rc.0.19+64ad393c62a2c5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:5963165f44b226e3bcc74555b40074fc3b47b362fc2fce35ac184854a283aa9a k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:63e87a9aad6a6666e3f6f4b9aaee92307a9390cfc1e41a029ceca9f6c4439478 
k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:123265849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:59073d1c8ff506ad93a224d17df309f7bbbc344acb535ac430fef15e3866f973 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:86742272,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:7f13d8eb6f1182786e7080a1f94ea5d4cf6873534c2d47e79c444c8216f823aa k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:53932856,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 
k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 
k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 17 14:17:07.390: INFO: Logging kubelet events for node kind-worker May 17 14:17:07.446: INFO: Logging pods the kubelet thinks is on node kind-worker May 17 14:17:07.497: INFO: frontend-685fc574d5-64jqm started at 2022-05-17 14:15:55 +0000 UTC (0+1 container statuses recorded) May 17 14:17:07.497: INFO: Container guestbook-frontend ready: true, restart count 0 May 17 14:17:07.497: INFO: alpine-nnp-true-6302bd2f-9cf1-4424-a665-225f9534e94b started at 2022-05-17 14:16:58 +0000 UTC (0+1 container statuses recorded) May 17 14:17:07.497: INFO: Container alpine-nnp-true-6302bd2f-9cf1-4424-a665-225f9534e94b ready: false, restart count 0 May 17 14:17:07.497: INFO: service-headless-q5g89 started at 2022-05-17 14:13:53 +0000 UTC (0+1 container statuses recorded) May 17 14:17:07.497: INFO: Container service-headless ready: true, restart count 0 May 17 14:17:07.497: INFO: hairpin started at 2022-05-17 14:16:59 +0000 UTC (0+1 container statuses recorded) May 17 14:17:07.497: 
INFO: Container agnhost-container ready: false, restart count 0 May 17 14:17:07.497: INFO: service-headless-toggled-9zh79 started at 2022-05-17 14:14:08 +0000 UTC (0+1 container statuses recorded) May 17 14:17:07.497: INFO: Container service-headless-toggled ready: true, restart count 0 May 17 14:17:07.497: INFO: service-headless-toggled-nw4rn started at 2022-05-17 14:14:08 +0000 UTC (0+1 container statuses recorded) May 17 14:17:07.497: INFO: Container service-headless-toggled ready: true, restart count 0 May 17 14:17:07.497: INFO: kube-proxy-dr9f5 started at 2022-05-17 14:13:23 +0000 UTC (0+1 container statuses recorded) May 17 14:17:07.497: INFO: Container kube-proxy ready: true, restart count 0 May 17 14:17:07.497: INFO: hostpath-symlink-prep-provisioning-7983 started at 2022-05-17 14:16:58 +0000 UTC (0+1 container statuses recorded) May 17 14:17:07.497: INFO: Container init-volume-provisioning-7983 ready: true, restart count 0 May 17 14:17:07.497: INFO: hostexec-kind-worker-gzkkg started at 2022-05-17 14:15:51 +0000 UTC (0+1 container statuses recorded) May 17 14:17:07.497: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:17:07.497: INFO: liveness-7a123bfb-f826-4ce3-ad61-ebc03b5bc931 started at 2022-05-17 14:14:13 +0000 UTC (0+1 container statuses recorded) May 17 14:17:07.497: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:17:07.497: INFO: concurrent-27546616-5t86w started at 2022-05-17 14:16:00 +0000 UTC (0+1 container statuses recorded) May 17 14:17:07.497: INFO: Container c ready: false, restart count 0 May 17 14:17:07.497: INFO: csi-mockplugin-attacher-0 started at <nil> (0+0 container statuses recorded) May 17 14:17:07.497: INFO: kindnet-56p79 started at 2022-05-17 14:13:17 +0000 UTC (0+1 container statuses recorded) May 17 14:17:07.497: INFO: Container kindnet-cni ready: true, restart count 0 May 17 14:17:07.497: INFO: csi-mockplugin-0 started at 2022-05-17 14:16:01 +0000 UTC (0+3 container statuses 
recorded) May 17 14:17:07.497: INFO: Container csi-provisioner ready: true, restart count 0 May 17 14:17:07.497: INFO: Container driver-registrar ready: true, restart count 0 May 17 14:17:07.497: INFO: Container mock ready: true, restart count 0 May 17 14:17:07.497: INFO: implicit-nonroot-uid started at 2022-05-17 14:16:05 +0000 UTC (0+1 container statuses recorded) May 17 14:17:07.497: INFO: Container implicit-nonroot-uid ready: false, restart count 0 May 17 14:17:07.497: INFO: test-ss-0 started at 2022-05-17 14:16:59 +0000 UTC (0+1 container statuses recorded) May 17 14:17:07.497: INFO: Container webserver ready: false, restart count 0 May 17 14:17:07.497: INFO: pod-submit-remove-0135fe98-eab9-46e6-b34f-fb48a8d59d10 started at 2022-05-17 14:16:59 +0000 UTC (0+1 container statuses recorded) May 17 14:17:07.497: INFO: Container agnhost-container ready: false, restart count 0 May 17 14:17:07.497: INFO: pod-1d484299-c922-4658-86f6-e8e3909b491c started at 2022-05-17 14:17:00 +0000 UTC (0+1 container statuses recorded) May 17 14:17:07.497: INFO: Container write-pod ready: false, restart count 0 May 17 14:17:07.497: INFO: csi-hostpathplugin-0 started at 2022-05-17 14:15:44 +0000 UTC (0+7 container statuses recorded) May 17 14:17:07.497: INFO: Container csi-attacher ready: true, restart count 0 May 17 14:17:07.497: INFO: Container csi-provisioner ready: true, restart count 0 May 17 14:17:07.497: INFO: Container csi-resizer ready: true, restart count 0 May 17 14:17:07.497: INFO: Container csi-snapshotter ready: true, restart count 0 May 17 14:17:07.497: INFO: Container hostpath ready: true, restart count 0 May 17 14:17:07.497: INFO: Container liveness-probe ready: true, restart count 0 May 17 14:17:07.497: INFO: Container node-driver-registrar ready: true, restart count 0 May 17 14:17:07.497: INFO: service-proxy-toggled-mlq7q started at 2022-05-17 14:15:47 +0000 UTC (0+1 container statuses recorded) May 17 14:17:07.498: INFO: Container service-proxy-toggled ready: true, 
restart count 0 May 17 14:17:07.498: INFO: hostexec-kind-worker-6t6pq started at 2022-05-17 14:15:52 +0000 UTC (0+1 container statuses recorded) May 17 14:17:07.498: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:17:07.498: INFO: agnhost-primary-5db8ddd565-ktltd started at 2022-05-17 14:15:55 +0000 UTC (0+1 container statuses recorded) May 17 14:17:07.498: INFO: Container primary ready: true, restart count 0 May 17 14:17:07.498: INFO: csi-mockplugin-0 started at <nil> (0+0 container statuses recorded) May 17 14:17:07.498: INFO: agnhost-replica-6bcf79b489-885sh started at 2022-05-17 14:15:56 +0000 UTC (0+1 container statuses recorded) May 17 14:17:07.498: INFO: Container replica ready: true, restart count 0 May 17 14:17:07.498: INFO: hostexec-kind-worker-kbzwd started at 2022-05-17 14:16:05 +0000 UTC (0+1 container statuses recorded) May 17 14:17:07.498: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:17:07.498: INFO: service-headless-tgqlh started at 2022-05-17 14:13:53 +0000 UTC (0+1 container statuses recorded) May 17 14:17:07.498: INFO: Container service-headless ready: true, restart count 0 May 17 14:17:07.498: INFO: explicit-nonroot-uid started at 2022-05-17 14:16:59 +0000 UTC (0+1 container statuses recorded) May 17 14:17:07.498: INFO: Container explicit-nonroot-uid ready: false, restart count 0 May 17 14:17:07.498: INFO: service-proxy-disabled-qk4md started at 2022-05-17 14:15:32 +0000 UTC (0+1 container statuses recorded) May 17 14:17:07.498: INFO: Container service-proxy-disabled ready: true, restart count 0 May 17 14:17:07.498: INFO: pod-qos-class-5c206c25-c993-4e12-9d89-06931c05bb47 started at 2022-05-17 14:16:07 +0000 UTC (0+1 container statuses recorded) May 17 14:17:07.498: INFO: Container agnhost ready: false, restart count 0 May 17 14:17:07.849: INFO: Latency metrics for node kind-worker May 17 14:17:07.849: INFO: Logging node info for node kind-worker2 May 17 14:17:07.880: INFO: Node Info: 
&Node{ObjectMeta:{kind-worker2 d009e97f-d73b-4ed0-8afd-9f776bcb704d 7361 0 2022-05-17 14:13:16 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker2 kubernetes.io/os:linux topology.hostpath.csi/node:kind-worker2] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-2591":"kind-worker2","csi-mock-csi-mock-volumes-8289":"csi-mock-csi-mock-volumes-8289"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubeadm Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubelet Update v1 2022-05-17 14:15:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-17 14:16:47 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-17 14:16:47 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-17 14:16:47 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-17 14:16:47 +0000 UTC,LastTransitionTime:2022-05-17 14:13:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.3,},NodeAddress{Type:Hostname,Address:kind-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:521433a178fd4494a7881394e8d644cf,SystemUUID:c86f292f-55cb-49c2-9dda-efb91a06dab9,BootID:74a98684-362f-4056-ae8e-c98add53e9d2,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.19+64ad393c62a2c5,KubeProxyVersion:v1.22.10-rc.0.19+64ad393c62a2c5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:5963165f44b226e3bcc74555b40074fc3b47b362fc2fce35ac184854a283aa9a k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:63e87a9aad6a6666e3f6f4b9aaee92307a9390cfc1e41a029ceca9f6c4439478 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:123265849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:59073d1c8ff506ad93a224d17df309f7bbbc344acb535ac430fef15e3866f973 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:7f13d8eb6f1182786e7080a1f94ea5d4cf6873534c2d47e79c444c8216f823aa k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:53932856,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 
k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce 
k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 17 14:17:07.881: INFO: Logging kubelet events for node kind-worker2 May 17 14:17:07.929: INFO: Logging pods the kubelet thinks is on node kind-worker2 May 17 14:17:07.994: INFO: sample-webhook-deployment-78988fc6cd-vk2pb started at 2022-05-17 14:17:01 +0000 UTC 
(0+1 container statuses recorded) May 17 14:17:07.994: INFO: Container sample-webhook ready: false, restart count 0 May 17 14:17:07.994: INFO: service-proxy-toggled-wrl7r started at 2022-05-17 14:15:47 +0000 UTC (0+1 container statuses recorded) May 17 14:17:07.994: INFO: Container service-proxy-toggled ready: true, restart count 0 May 17 14:17:07.994: INFO: pod-projected-configmaps-00d06ba1-8f76-4414-937f-0e4862892c3e started at <nil> (0+0 container statuses recorded) May 17 14:17:07.994: INFO: service-proxy-disabled-brdhb started at 2022-05-17 14:15:32 +0000 UTC (0+1 container statuses recorded) May 17 14:17:07.994: INFO: Container service-proxy-disabled ready: true, restart count 0 May 17 14:17:07.994: INFO: tester started at <nil> (0+0 container statuses recorded) May 17 14:17:07.994: INFO: service-proxy-disabled-xzkc8 started at 2022-05-17 14:15:32 +0000 UTC (0+1 container statuses recorded) May 17 14:17:07.994: INFO: Container service-proxy-disabled ready: true, restart count 0 May 17 14:17:07.994: INFO: frontend-685fc574d5-lc6wl started at 2022-05-17 14:15:55 +0000 UTC (0+1 container statuses recorded) May 17 14:17:07.994: INFO: Container guestbook-frontend ready: false, restart count 0 May 17 14:17:07.994: INFO: frontend-685fc574d5-mj2wn started at 2022-05-17 14:15:55 +0000 UTC (0+1 container statuses recorded) May 17 14:17:07.994: INFO: Container guestbook-frontend ready: true, restart count 0 May 17 14:17:07.994: INFO: kube-proxy-sdmmb started at 2022-05-17 14:13:29 +0000 UTC (0+1 container statuses recorded) May 17 14:17:07.994: INFO: Container kube-proxy ready: true, restart count 0 May 17 14:17:07.994: INFO: verify-service-down-host-exec-pod started at <nil> (0+0 container statuses recorded) May 17 14:17:07.994: INFO: hostexec-kind-worker2-7bqcw started at <nil> (0+0 container statuses recorded) May 17 14:17:07.994: INFO: pod-9ec2f9c5-e367-4a4f-8edf-f159344127d8 started at 2022-05-17 14:16:03 +0000 UTC (0+1 container statuses recorded) May 17 
14:17:07.994: INFO: Container write-pod ready: true, restart count 0 May 17 14:17:07.994: INFO: pod-subpath-test-inlinevolume-lfq5 started at 2022-05-17 14:16:58 +0000 UTC (2+1 container statuses recorded) May 17 14:17:07.994: INFO: Init container init-volume-inlinevolume-lfq5 ready: true, restart count 0 May 17 14:17:07.994: INFO: Init container test-init-volume-inlinevolume-lfq5 ready: false, restart count 0 May 17 14:17:07.994: INFO: Container test-container-subpath-inlinevolume-lfq5 ready: false, restart count 0 May 17 14:17:07.994: INFO: server started at 2022-05-17 14:16:58 +0000 UTC (0+1 container statuses recorded) May 17 14:17:07.994: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:17:07.994: INFO: service-proxy-toggled-djh27 started at 2022-05-17 14:15:47 +0000 UTC (0+1 container statuses recorded) May 17 14:17:07.994: INFO: Container service-proxy-toggled ready: true, restart count 0 May 17 14:17:07.994: INFO: pod-secrets-a8a6d006-fb93-4771-9eef-47ae406f935d started at 2022-05-17 14:16:59 +0000 UTC (0+1 container statuses recorded) May 17 14:17:07.994: INFO: Container secret-volume-test ready: false, restart count 0 May 17 14:17:07.994: INFO: service-headless-toggled-dvx6b started at 2022-05-17 14:14:08 +0000 UTC (0+1 container statuses recorded) May 17 14:17:07.994: INFO: Container service-headless-toggled ready: true, restart count 0 May 17 14:17:07.994: INFO: kindnet-mq8qb started at 2022-05-17 14:13:17 +0000 UTC (0+1 container statuses recorded) May 17 14:17:07.994: INFO: Container kindnet-cni ready: true, restart count 0 May 17 14:17:07.994: INFO: service-headless-9mh45 started at 2022-05-17 14:13:53 +0000 UTC (0+1 container statuses recorded) May 17 14:17:07.994: INFO: Container service-headless ready: true, restart count 0 May 17 14:17:07.994: INFO: test-webserver-4226f15c-cd18-4e90-bf31-e079c9669de0 started at 2022-05-17 14:16:59 +0000 UTC (0+1 container statuses recorded) May 17 14:17:07.994: INFO: Container 
test-webserver ready: false, restart count 0 May 17 14:17:07.994: INFO: hostexec-kind-worker2-gh6gn started at 2022-05-17 14:15:19 +0000 UTC (0+1 container statuses recorded) May 17 14:17:07.994: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:17:07.994: INFO: pod-2d828178-7e03-4331-91b0-2165aa4eeee5 started at 2022-05-17 14:16:06 +0000 UTC (0+1 container statuses recorded) May 17 14:17:07.994: INFO: Container write-pod ready: false, restart count 2 May 17 14:17:07.994: INFO: csi-mockplugin-0 started at 2022-05-17 14:15:19 +0000 UTC (0+4 container statuses recorded) May 17 14:17:07.994: INFO: Container busybox ready: true, restart count 0 May 17 14:17:07.994: INFO: Container csi-provisioner ready: false, restart count 1 May 17 14:17:07.994: INFO: Container driver-registrar ready: true, restart count 0 May 17 14:17:07.994: INFO: Container mock ready: true, restart count 0 May 17 14:17:07.994: INFO: csi-mockplugin-0 started at <nil> (0+0 container statuses recorded) May 17 14:17:07.994: INFO: hostexec-kind-worker2-k8cx7 started at 2022-05-17 14:15:45 +0000 UTC (0+1 container statuses recorded) May 17 14:17:07.994: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:17:07.994: INFO: csi-hostpathplugin-0 started at 2022-05-17 14:13:56 +0000 UTC (0+7 container statuses recorded) May 17 14:17:07.994: INFO: Container csi-attacher ready: true, restart count 0 May 17 14:17:07.994: INFO: Container csi-provisioner ready: true, restart count 0 May 17 14:17:07.994: INFO: Container csi-resizer ready: true, restart count 0 May 17 14:17:07.994: INFO: Container csi-snapshotter ready: true, restart count 0 May 17 14:17:07.994: INFO: Container hostpath ready: true, restart count 0 May 17 14:17:07.994: INFO: Container liveness-probe ready: true, restart count 0 May 17 14:17:07.994: INFO: Container node-driver-registrar ready: true, restart count 0 May 17 14:17:07.994: INFO: ss-0 started at 2022-05-17 14:15:25 +0000 UTC (0+1 container 
statuses recorded) May 17 14:17:07.994: INFO: Container webserver ready: true, restart count 0 May 17 14:17:07.994: INFO: pod-23740531-141e-4062-9ddb-69c15ab1b060 started at 2022-05-17 14:15:53 +0000 UTC (0+1 container statuses recorded) May 17 14:17:07.994: INFO: Container write-pod ready: true, restart count 0 May 17 14:17:08.350: INFO: Latency metrics for node kind-worker2 May 17 14:17:08.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3114" for this suite.
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sStatefulSet\sBasic\sStatefulSet\sfunctionality\s\[StatefulSetBasic\]\sshould\sprovide\sbasic\sidentity$'
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:128 May 17 14:16:22.821: Unexpected error: <*errors.StatusError | 0xc0007a2140>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "etcdserver: request timed out", Reason: "", Details: nil, Code: 500, }, } etcdserver: request timed out occurred /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/rest.go:68from junit_06.xml
[BeforeEach] [sig-apps] StatefulSet /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 17 14:15:09.850: INFO: >>> kubeConfig: /root/.kube/kind-test-config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107 STEP: Creating service test in namespace statefulset-8678 [It] should provide basic identity /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:128 STEP: Creating statefulset ss in namespace statefulset-8678 May 17 14:15:09.962: INFO: Default storage class: "standard" STEP: Saturating stateful set ss May 17 14:15:09.983: INFO: Waiting for stateful pod at index 0 to enter Running May 17 14:15:10.025: INFO: Found 0 stateful pods, waiting for 1 May 17 14:15:20.057: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Pending - Ready=false May 17 14:15:30.029: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Pending - Ready=false May 17 14:15:40.029: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 17 14:15:40.029: INFO: Resuming stateful pod at index 0 May 17 14:15:40.032: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://127.0.0.1:44135 --kubeconfig=/root/.kube/kind-test-config --namespace=statefulset-8678 exec ss-0 -- /bin/sh -x -c dd if=/dev/zero of=/data/statefulset-continue bs=1 count=1 conv=fsync' May 17 14:15:40.357: INFO: stderr: "+ dd 'if=/dev/zero' 
'of=/data/statefulset-continue' 'bs=1' 'count=1' 'conv=fsync'\n1+0 records in\n1+0 records out\n" May 17 14:15:40.357: INFO: stdout: "" May 17 14:15:40.357: INFO: Resumed pod ss-0 May 17 14:15:40.357: INFO: Waiting for stateful pod at index 1 to enter Running May 17 14:15:40.386: INFO: Found 1 stateful pods, waiting for 2 May 17 14:15:50.400: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 17 14:15:50.400: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Pending - Ready=false May 17 14:16:00.400: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 17 14:16:00.400: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Pending - Ready=false May 17 14:16:22.821: FAIL: Unexpected error: <*errors.StatusError | 0xc0007a2140>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "etcdserver: request timed out", Reason: "", Details: nil, Code: 500, }, } etcdserver: request timed out occurred Full Stack Trace k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList(0x79bc3e8, 0xc001ed4580, 0xc000442a00, 0x42) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/rest.go:68 +0x21e k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning.func1(0xc001f8b7a0, 0xc001f8b7a0, 0xc001f8b7a0) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/wait.go:37 +0x67 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x79286a8, 0xc000056098, 0xc001242e8c, 0x1, 0x2) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217 +0x26 
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext(0x79286a8, 0xc000056098, 0xc002bb8d20, 0x0, 0x0, 0x0)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:230 +0x7f
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext(0x79286a8, 0xc000056098, 0xc001f1e678, 0xc002bb8d20, 0x0, 0x0)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:657 +0x159
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll(0x79286a8, 0xc000056098, 0xc001f1e601, 0xc001f1e678, 0xc002bb8d20, 0x686a460, 0xc002bb8d20)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:591 +0xa5
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext(0x79286a8, 0xc000056098, 0x2540be400, 0x8bb2c97000, 0xc002bb8d20, 0x6beee60, 0x1)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:523 +0x66
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x2540be400, 0x8bb2c97000, 0xc003d2db60, 0x71923ee, 0x35)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:509 +0x6f
k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning(0x79bc3e8, 0xc001ed4580, 0x100000002, 0xc000442a00)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/wait.go:35 +0x9d
k8s.io/kubernetes/test/e2e/framework/statefulset.Saturate(0x79bc3e8, 0xc001ed4580, 0xc000442a00)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/wait.go:179 +0xd4
k8s.io/kubernetes/test/e2e/apps.glob..func9.2.3()
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:138 +0x2cb k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000c82600) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc000c82600) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b testing.tRunner(0xc000c82600, 0x72e36d8) /usr/local/go/src/testing/testing.go:1203 +0xe5 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1248 +0x2b3 E0517 14:16:22.823245 86456 runtime.go:78] Observed a panic: ginkgowrapper.FailurePanic{Message:"May 17 14:16:22.821: Unexpected error:\n <*errors.StatusError | 0xc0007a2140>: {\n ErrStatus: {\n TypeMeta: {Kind: \"\", APIVersion: \"\"},\n ListMeta: {\n SelfLink: \"\",\n ResourceVersion: \"\",\n Continue: \"\",\n RemainingItemCount: nil,\n },\n Status: \"Failure\",\n Message: \"etcdserver: request timed out\",\n Reason: \"\",\n Details: nil,\n Code: 500,\n },\n }\n etcdserver: request timed out\noccurred", Filename:"/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/rest.go", Line:68, FullStackTrace:"k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList(0x79bc3e8, 0xc001ed4580, 0xc000442a00, 0x42)\n\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/rest.go:68 +0x21e\nk8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning.func1(0xc001f8b7a0, 0xc001f8b7a0, 0xc001f8b7a0)\n\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/wait.go:37 +0x67\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x79286a8, 0xc000056098, 0xc001242e8c, 0x1, 0x2)\n\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217 
+0x26\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext(0x79286a8, 0xc000056098, 0xc002bb8d20, 0x0, 0x0, 0x0)\n\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:230 +0x7f\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext(0x79286a8, 0xc000056098, 0xc001f1e678, 0xc002bb8d20, 0x0, 0x0)\n\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:657 +0x159\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll(0x79286a8, 0xc000056098, 0xc001f1e601, 0xc001f1e678, 0xc002bb8d20, 0x686a460, 0xc002bb8d20)\n\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:591 +0xa5\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext(0x79286a8, 0xc000056098, 0x2540be400, 0x8bb2c97000, 0xc002bb8d20, 0x6beee60, 0x1)\n\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:523 +0x66\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x2540be400, 0x8bb2c97000, 0xc003d2db60, 0x71923ee, 0x35)\n\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:509 +0x6f\nk8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning(0x79bc3e8, 0xc001ed4580, 0x100000002, 0xc000442a00)\n\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/wait.go:35 +0x9d\nk8s.io/kubernetes/test/e2e/framework/statefulset.Saturate(0x79bc3e8, 0xc001ed4580, 0xc000442a00)\n\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/wait.go:179 
+0xd4\nk8s.io/kubernetes/test/e2e/apps.glob..func9.2.3()\n\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:138 +0x2cb\nk8s.io/kubernetes/test/e2e.RunE2ETests(0xc000c82600)\n\t_output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c\nk8s.io/kubernetes/test/e2e.TestE2E(0xc000c82600)\n\t_output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b\ntesting.tRunner(0xc000c82600, 0x72e36d8)\n\t/usr/local/go/src/testing/testing.go:1203 +0xe5\ncreated by testing.(*T).Run\n\t/usr/local/go/src/testing/testing.go:1248 +0x2b3"} ( Your test failed. Ginkgo panics to prevent subsequent assertions from running. Normally Ginkgo rescues this panic so you shouldn't see it. But, if you make an assertion in a goroutine, Ginkgo can't capture the panic. To circumvent this, you should call defer GinkgoRecover() at the top of the goroutine that caused this panic. ) goroutine 120 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x6c03da0, 0xc002d24300) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86 panic(0x6c03da0, 0xc002d24300) /usr/local/go/src/runtime/panic.go:965 +0x1b9 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc002cf8480, 0x224, 0x8985320, 0x71, 0x44, 0xc00215cd80, 0xc82) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5 panic(0x6330600, 0x77df3e0) /usr/local/go/src/runtime/panic.go:965 +0x1b9 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail(0xc002cf8480, 0x224, 0xc0012423c0, 0x1, 0x1) 
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:260 +0xc8 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc002cf8480, 0x224, 0xc0012424a8, 0x1, 0x1) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5 k8s.io/kubernetes/test/e2e/framework.Fail(0xc002cf8240, 0x20f, 0xc0022161c0, 0x1, 0x1) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:62 +0x1ea k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).match(0xc001242630, 0x7914db8, 0xa0e1f88, 0x0, 0x0, 0x0, 0x0, 0xc0007a2140) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:79 +0x216 k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).NotTo(0xc001242630, 0x7914db8, 0xa0e1f88, 0x0, 0x0, 0x0, 0x0) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:48 +0xc7 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, 0x78b1d40, 0xc0007a2140, 0x0, 0x0, 0x0) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/expect.go:46 +0xe7 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) 
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/expect.go:40 k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList(0x79bc3e8, 0xc001ed4580, 0xc000442a00, 0x42) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/rest.go:68 +0x21e k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning.func1(0xc001f8b7a0, 0xc001f8b7a0, 0xc001f8b7a0) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/wait.go:37 +0x67 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x79286a8, 0xc000056098, 0xc001242e8c, 0x1, 0x2) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217 +0x26 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext(0x79286a8, 0xc000056098, 0xc002bb8d20, 0x0, 0x0, 0x0) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:230 +0x7f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext(0x79286a8, 0xc000056098, 0xc001f1e678, 0xc002bb8d20, 0x0, 0x0) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:657 +0x159 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll(0x79286a8, 0xc000056098, 0xc001f1e601, 0xc001f1e678, 0xc002bb8d20, 0x686a460, 0xc002bb8d20) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:591 +0xa5 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext(0x79286a8, 0xc000056098, 0x2540be400, 0x8bb2c97000, 0xc002bb8d20, 0x6beee60, 0x1) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:523 
+0x66 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x2540be400, 0x8bb2c97000, 0xc003d2db60, 0x71923ee, 0x35) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:509 +0x6f k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning(0x79bc3e8, 0xc001ed4580, 0x100000002, 0xc000442a00) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/wait.go:35 +0x9d k8s.io/kubernetes/test/e2e/framework/statefulset.Saturate(0x79bc3e8, 0xc001ed4580, 0xc000442a00) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/wait.go:179 +0xd4 k8s.io/kubernetes/test/e2e/apps.glob..func9.2.3() /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:138 +0x2cb k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc0003ee7e0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc0003ee7e0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run(0xc00053fe60, 0x78adcc0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) 
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/it_node.go:26 +0x87 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc00320b4a0, 0x0, 0x78adcc0, 0xc000070880) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:215 +0x72f k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc00320b4a0, 0x78adcc0, 0xc000070880) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc003662000, 0xc00320b4a0, 0x0) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc003662000, 0x1) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x147 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc003662000, 0xc003656030) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc0001a0070, 0x7fa5780da140, 0xc000c82600, 0x70ae1f5, 0x14, 0xc000a12060, 0x3, 0x3, 0x79634d8, 0xc000070880, ...) 
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:79 +0x546 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x78b37c0, 0xc000c82600, 0x70ae1f5, 0x14, 0xc000a16000, 0x3, 0x4, 0x4) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:219 +0x218 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x78b37c0, 0xc000c82600, 0x70ae1f5, 0x14, 0xc0004147c0, 0x2, 0x2, 0x25) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:207 +0xad k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000c82600) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc000c82600) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b testing.tRunner(0xc000c82600, 0x72e36d8) /usr/local/go/src/testing/testing.go:1203 +0xe5 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1248 +0x2b3 [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118 May 17 14:16:25.591: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://127.0.0.1:44135 --kubeconfig=/root/.kube/kind-test-config --namespace=statefulset-8678 describe po ss-0' May 17 14:16:25.984: INFO: stderr: "" May 17 14:16:25.984: INFO: stdout: "Name: ss-0\nNamespace: statefulset-8678\nPriority: 0\nNode: kind-worker2/172.18.0.3\nStart Time: Tue, 17 May 2022 14:15:25 +0000\nLabels: baz=blah\n controller-revision-hash=ss-696cb77d7d\n foo=bar\n statefulset.kubernetes.io/pod-name=ss-0\nAnnotations: <none>\nStatus: Running\nIP: 10.244.2.46\nIPs:\n IP: 10.244.2.46\nControlled By: StatefulSet/ss\nContainers:\n webserver:\n Container ID: 
containerd://23437b613c848cddc0674dde7caacdab15e1b0e85a7968a715d99a1e486934dc\n Image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\n Image ID: k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\n Port: <none>\n Host Port: <none>\n State: Running\n Started: Tue, 17 May 2022 14:15:28 +0000\n Ready: True\n Restart Count: 0\n Readiness: exec [test -f /data/statefulset-continue] delay=0s timeout=1s period=1s #success=1 #failure=1\n Environment: <none>\n Mounts:\n /data/ from datadir (rw)\n /home from home (rw)\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gwfkj (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n datadir:\n Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)\n ClaimName: datadir-ss-0\n ReadOnly: false\n home:\n Type: HostPath (bare host directory volume)\n Path: /tmp/home\n HostPathType: \n kube-api-access-gwfkj:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 60s default-scheduler Successfully assigned statefulset-8678/ss-0 to kind-worker2\n Normal Pulled 57s kubelet Container image \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\" already present on machine\n Normal Created 57s kubelet Created container webserver\n Normal Started 57s kubelet Started container webserver\n Warning Unhealthy 45s (x14 over 56s) kubelet Readiness probe failed:\n" May 17 14:16:25.984: INFO: Output of kubectl describe ss-0: Name: ss-0 Namespace: statefulset-8678 Priority: 0 Node: 
kind-worker2/172.18.0.3 Start Time: Tue, 17 May 2022 14:15:25 +0000 Labels: baz=blah controller-revision-hash=ss-696cb77d7d foo=bar statefulset.kubernetes.io/pod-name=ss-0 Annotations: <none> Status: Running IP: 10.244.2.46 IPs: IP: 10.244.2.46 Controlled By: StatefulSet/ss Containers: webserver: Container ID: containerd://23437b613c848cddc0674dde7caacdab15e1b0e85a7968a715d99a1e486934dc Image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Image ID: k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 Port: <none> Host Port: <none> State: Running Started: Tue, 17 May 2022 14:15:28 +0000 Ready: True Restart Count: 0 Readiness: exec [test -f /data/statefulset-continue] delay=0s timeout=1s period=1s #success=1 #failure=1 Environment: <none> Mounts: /data/ from datadir (rw) /home from home (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gwfkj (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: datadir: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: datadir-ss-0 ReadOnly: false home: Type: HostPath (bare host directory volume) Path: /tmp/home HostPathType: kube-api-access-gwfkj: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: BestEffort Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 60s default-scheduler Successfully assigned statefulset-8678/ss-0 to kind-worker2 Normal Pulled 57s kubelet Container image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" already present on machine Normal Created 57s kubelet Created container webserver Normal Started 57s kubelet Started 
container webserver Warning Unhealthy 45s (x14 over 56s) kubelet Readiness probe failed: May 17 14:16:25.984: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://127.0.0.1:44135 --kubeconfig=/root/.kube/kind-test-config --namespace=statefulset-8678 logs ss-0 --tail=100' May 17 14:16:26.387: INFO: stderr: "" May 17 14:16:26.388: INFO: stdout: "[Tue May 17 14:15:28.787298 2022] [mpm_event:notice] [pid 1:tid 140576475077480] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Tue May 17 14:15:28.787384 2022] [core:notice] [pid 1:tid 140576475077480] AH00094: Command line: 'httpd -D FOREGROUND'\n" May 17 14:16:26.388: INFO: Last 100 log lines of ss-0: [Tue May 17 14:15:28.787298 2022] [mpm_event:notice] [pid 1:tid 140576475077480] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations [Tue May 17 14:15:28.787384 2022] [core:notice] [pid 1:tid 140576475077480] AH00094: Command line: 'httpd -D FOREGROUND' May 17 14:16:26.388: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://127.0.0.1:44135 --kubeconfig=/root/.kube/kind-test-config --namespace=statefulset-8678 describe po ss-1' May 17 14:16:26.683: INFO: stderr: "" May 17 14:16:26.683: INFO: stdout: "Name: ss-1\nNamespace: statefulset-8678\nPriority: 0\nNode: kind-worker/\nLabels: baz=blah\n controller-revision-hash=ss-696cb77d7d\n foo=bar\n statefulset.kubernetes.io/pod-name=ss-1\nAnnotations: <none>\nStatus: Pending\nIP: \nIPs: <none>\nControlled By: StatefulSet/ss\nContainers:\n webserver:\n Image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\n Port: <none>\n Host Port: <none>\n Readiness: exec [test -f /data/statefulset-continue] delay=0s timeout=1s period=1s #success=1 #failure=1\n Environment: <none>\n Mounts:\n /data/ from datadir (rw)\n /home from home (rw)\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r96rc (ro)\nConditions:\n Type Status\n PodScheduled True \nVolumes:\n datadir:\n 
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)\n ClaimName: datadir-ss-1\n ReadOnly: false\n home:\n Type: HostPath (bare host directory volume)\n Path: /tmp/home\n HostPathType: \n kube-api-access-r96rc:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 20s default-scheduler Successfully assigned statefulset-8678/ss-1 to kind-worker\n" May 17 14:16:26.683: INFO: Output of kubectl describe ss-1: Name: ss-1 Namespace: statefulset-8678 Priority: 0 Node: kind-worker/ Labels: baz=blah controller-revision-hash=ss-696cb77d7d foo=bar statefulset.kubernetes.io/pod-name=ss-1 Annotations: <none> Status: Pending IP: IPs: <none> Controlled By: StatefulSet/ss Containers: webserver: Image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Port: <none> Host Port: <none> Readiness: exec [test -f /data/statefulset-continue] delay=0s timeout=1s period=1s #success=1 #failure=1 Environment: <none> Mounts: /data/ from datadir (rw) /home from home (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r96rc (ro) Conditions: Type Status PodScheduled True Volumes: datadir: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: datadir-ss-1 ReadOnly: false home: Type: HostPath (bare host directory volume) Path: /tmp/home HostPathType: kube-api-access-r96rc: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: BestEffort Node-Selectors: <none> 
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 20s default-scheduler Successfully assigned statefulset-8678/ss-1 to kind-worker
May 17 14:16:26.683: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://127.0.0.1:44135 --kubeconfig=/root/.kube/kind-test-config --namespace=statefulset-8678 logs ss-1 --tail=100'
May 17 14:16:26.993: INFO: rc: 1
May 17 14:16:26.993: INFO: Last 100 log lines of ss-1:
May 17 14:16:26.993: INFO: Deleting all statefulset in ns statefulset-8678
May 17 14:16:27.071: INFO: Scaling statefulset ss to 0
May 17 14:17:17.280: INFO: Waiting for statefulset status.replicas updated to 0
May 17 14:17:17.311: INFO: Deleting statefulset ss
May 17 14:17:17.460: INFO: Deleting pvc: datadir-ss-0 with volume pvc-edde1afa-8410-48c8-b9aa-fa632a3bc1ec
May 17 14:17:17.574: INFO: Deleting pvc: datadir-ss-1 with volume pvc-8dd50b3d-3c93-4d89-9129-27289e812806
May 17 14:17:17.690: INFO: Still waiting for pvs of statefulset to disappear: pvc-8dd50b3d-3c93-4d89-9129-27289e812806: {Phase:Bound Message: Reason:} pvc-edde1afa-8410-48c8-b9aa-fa632a3bc1ec: {Phase:Bound Message: Reason:}
May 17 14:17:27.695: INFO: Still waiting for pvs of statefulset to disappear: pvc-8dd50b3d-3c93-4d89-9129-27289e812806: {Phase:Released Message: Reason:} pvc-edde1afa-8410-48c8-b9aa-fa632a3bc1ec: {Phase:Released Message: Reason:}
May 17 14:17:37.721: INFO: Still waiting for pvs of statefulset to disappear: pvc-8dd50b3d-3c93-4d89-9129-27289e812806: {Phase:Released Message: Reason:}
[AfterEach] [sig-apps] StatefulSet
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "statefulset-8678".
STEP: Found 28 events.
May 17 14:17:47.871: INFO: At 2022-05-17 14:15:10 +0000 UTC - event for datadir-ss-0: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding
May 17 14:17:47.871: INFO: At 2022-05-17 14:15:10 +0000 UTC - event for datadir-ss-0: {persistentvolume-controller } ExternalProvisioning: waiting for a volume to be created, either by external provisioner "rancher.io/local-path" or manually created by system administrator
May 17 14:17:47.871: INFO: At 2022-05-17 14:15:10 +0000 UTC - event for datadir-ss-0: {rancher.io/local-path_local-path-provisioner-6c9449b9dd-bllk2_47e63468-82a2-46db-ab76-3062a5e1f46d } Provisioning: External provisioner is provisioning volume for claim "statefulset-8678/datadir-ss-0"
May 17 14:17:47.871: INFO: At 2022-05-17 14:15:10 +0000 UTC - event for ss: {statefulset-controller } SuccessfulCreate: create Claim datadir-ss-0 Pod ss-0 in StatefulSet ss success
May 17 14:17:47.871: INFO: At 2022-05-17 14:15:10 +0000 UTC - event for ss: {statefulset-controller } SuccessfulCreate: create Pod ss-0 in StatefulSet ss successful
May 17 14:17:47.871: INFO: At 2022-05-17 14:15:24 +0000 UTC - event for datadir-ss-0: {rancher.io/local-path_local-path-provisioner-6c9449b9dd-bllk2_47e63468-82a2-46db-ab76-3062a5e1f46d } ProvisioningSucceeded: Successfully provisioned volume pvc-edde1afa-8410-48c8-b9aa-fa632a3bc1ec
May 17 14:17:47.871: INFO: At 2022-05-17 14:15:25 +0000 UTC - event for ss-0: {default-scheduler } Scheduled: Successfully assigned statefulset-8678/ss-0 to kind-worker2
May 17 14:17:47.871: INFO: At 2022-05-17 14:15:28 +0000 UTC - event for ss-0: {kubelet kind-worker2} Started: Started container webserver
May 17 14:17:47.871: INFO: At 2022-05-17 14:15:28 +0000 UTC - event for ss-0: {kubelet kind-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" already present on machine
May 17 14:17:47.871: INFO: At 2022-05-17 14:15:28 +0000 UTC - event for ss-0: {kubelet kind-worker2} Created: Created container webserver
May 17 14:17:47.872: INFO: At 2022-05-17 14:15:29 +0000 UTC - event for ss-0: {kubelet kind-worker2} Unhealthy: Readiness probe failed:
May 17 14:17:47.872: INFO: At 2022-05-17 14:15:43 +0000 UTC - event for datadir-ss-1: {persistentvolume-controller } ExternalProvisioning: waiting for a volume to be created, either by external provisioner "rancher.io/local-path" or manually created by system administrator
May 17 14:17:47.872: INFO: At 2022-05-17 14:15:43 +0000 UTC - event for datadir-ss-1: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding
May 17 14:17:47.872: INFO: At 2022-05-17 14:15:43 +0000 UTC - event for ss: {statefulset-controller } SuccessfulCreate: create Pod ss-1 in StatefulSet ss successful
May 17 14:17:47.872: INFO: At 2022-05-17 14:15:43 +0000 UTC - event for ss: {statefulset-controller } SuccessfulCreate: create Claim datadir-ss-1 Pod ss-1 in StatefulSet ss success
May 17 14:17:47.872: INFO: At 2022-05-17 14:15:44 +0000 UTC - event for datadir-ss-1: {rancher.io/local-path_local-path-provisioner-6c9449b9dd-bllk2_47e63468-82a2-46db-ab76-3062a5e1f46d } Provisioning: External provisioner is provisioning volume for claim "statefulset-8678/datadir-ss-1"
May 17 14:17:47.872: INFO: At 2022-05-17 14:16:05 +0000 UTC - event for datadir-ss-1: {rancher.io/local-path_local-path-provisioner-6c9449b9dd-bllk2_47e63468-82a2-46db-ab76-3062a5e1f46d } ProvisioningSucceeded: Successfully provisioned volume pvc-8dd50b3d-3c93-4d89-9129-27289e812806
May 17 14:17:47.872: INFO: At 2022-05-17 14:16:06 +0000 UTC - event for ss-1: {default-scheduler } Scheduled: Successfully assigned statefulset-8678/ss-1 to kind-worker
May 17 14:17:47.872: INFO: At 2022-05-17 14:16:22 +0000 UTC - event for ss-1: {kubelet kind-worker} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-r96rc" : failed to fetch token: etcdserver: request timed out
May 17 14:17:47.872: INFO: At 2022-05-17 14:16:23 +0000 UTC - event for ss-1: {kubelet kind-worker} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-r96rc" : failed to fetch token: Post "https://kind-control-plane:6443/api/v1/namespaces/statefulset-8678/serviceaccounts/default/token": read tcp 172.18.0.2:59596->172.18.0.4:6443: use of closed network connection
May 17 14:17:47.872: INFO: At 2022-05-17 14:16:27 +0000 UTC - event for ss-1: {kubelet kind-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" already present on machine
May 17 14:17:47.872: INFO: At 2022-05-17 14:16:28 +0000 UTC - event for ss-1: {kubelet kind-worker} Created: Created container webserver
May 17 14:17:47.872: INFO: At 2022-05-17 14:16:29 +0000 UTC - event for ss-1: {kubelet kind-worker} Started: Started container webserver
May 17 14:17:47.872: INFO: At 2022-05-17 14:16:30 +0000 UTC - event for ss-1: {kubelet kind-worker} Unhealthy: Readiness probe failed:
May 17 14:17:47.872: INFO: At 2022-05-17 14:16:59 +0000 UTC - event for ss: {statefulset-controller } SuccessfulDelete: delete Pod ss-1 in StatefulSet ss successful
May 17 14:17:47.872: INFO: At 2022-05-17 14:17:04 +0000 UTC - event for ss: {statefulset-controller } SuccessfulDelete: delete Pod ss-0 in StatefulSet ss successful
May 17 14:17:47.872: INFO: At 2022-05-17 14:17:04 +0000 UTC - event for ss-0: {kubelet kind-worker2} Killing: Stopping container webserver
May 17 14:17:47.872: INFO: At 2022-05-17 14:17:05 +0000 UTC - event for ss-0: {kubelet kind-worker2} Unhealthy: Readiness probe errored: rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state
May 17 14:17:47.951: INFO: POD NODE PHASE GRACE CONDITIONS
May 17 14:17:47.951: INFO:
May 17 14:17:47.972: INFO: Logging node info for node kind-control-plane
May 17 14:17:48.015: INFO: Node Info: &Node{ObjectMeta:{kind-control-plane 1fc1c875-88c2-4f79-b48f-f6ef32b124da 586 0 2022-05-17 14:12:51 +0000 UTC <nil> <nil> 
map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-05-17 14:12:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-05-17 14:12:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2022-05-17 14:13:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2022-05-17 14:13:20 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} 
status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-17 14:13:20 +0000 UTC,LastTransitionTime:2022-05-17 14:12:47 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-17 14:13:20 +0000 UTC,LastTransitionTime:2022-05-17 14:12:47 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-17 14:13:20 +0000 UTC,LastTransitionTime:2022-05-17 14:12:47 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-17 14:13:20 +0000 UTC,LastTransitionTime:2022-05-17 14:13:20 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.4,},NodeAddress{Type:Hostname,Address:kind-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:97599c4ddadc4d3eae566d304d7d104d,SystemUUID:e786229d-3adc-40a9-a01d-f7a176ae9288,BootID:74a98684-362f-4056-ae8e-c98add53e9d2,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.19+64ad393c62a2c5,KubeProxyVersion:v1.22.10-rc.0.19+64ad393c62a2c5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:5963165f44b226e3bcc74555b40074fc3b47b362fc2fce35ac184854a283aa9a k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:63e87a9aad6a6666e3f6f4b9aaee92307a9390cfc1e41a029ceca9f6c4439478 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:123265849,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:59073d1c8ff506ad93a224d17df309f7bbbc344acb535ac430fef15e3866f973 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:7f13d8eb6f1182786e7080a1f94ea5d4cf6873534c2d47e79c444c8216f823aa 
k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:53932856,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 17 14:17:48.015: INFO: Logging kubelet events for node kind-control-plane
May 17 14:17:48.083: INFO: Logging pods the kubelet thinks is on node kind-control-plane
May 17 14:17:48.151: INFO: local-path-provisioner-6c9449b9dd-bllk2 started at 2022-05-17 14:13:20 +0000 UTC (0+1 container statuses recorded)
May 17 14:17:48.151: INFO: Container local-path-provisioner ready: true, restart count 0
May 17 14:17:48.151: INFO: coredns-78fcd69978-r97ls started at 2022-05-17 14:13:20 +0000 UTC (0+1 container statuses recorded)
May 17 14:17:48.151: INFO: Container coredns ready: true, restart count 0
May 17 14:17:48.151: INFO: kube-proxy-5tzjs started at 2022-05-17 14:13:26 +0000 UTC (0+1 container statuses recorded)
May 17 14:17:48.151: INFO: Container kube-proxy ready: true, restart count 0
May 17 14:17:48.151: INFO: coredns-78fcd69978-b6zn2 started at 2022-05-17 14:13:20 +0000 UTC (0+1 container statuses recorded)
May 17 14:17:48.151: INFO: Container coredns ready: true, restart count 0
May 17 14:17:48.151: INFO: kube-apiserver-kind-control-plane started at 2022-05-17 14:13:01 +0000 UTC (0+1 container statuses recorded)
May 17 14:17:48.151: INFO: Container kube-apiserver ready: true, restart count 0
May 17 14:17:48.151: INFO: kube-controller-manager-kind-control-plane started at 2022-05-17 14:13:01 +0000 UTC (0+1 container statuses recorded)
May 17 14:17:48.151: INFO: Container kube-controller-manager ready: true, restart count 1
May 17 14:17:48.151: INFO: kube-scheduler-kind-control-plane started at 2022-05-17 14:13:00 +0000 UTC (0+1 container statuses recorded)
May 17 14:17:48.151: INFO: Container kube-scheduler ready: true, restart count 1
May 17 14:17:48.151: INFO: kindnet-s6wb7 started at 2022-05-17 14:13:07 +0000 UTC (0+1 container statuses recorded)
May 17 14:17:48.151: INFO: Container kindnet-cni ready: true, restart count 0
May 17 14:17:48.151: INFO: etcd-kind-control-plane started at 2022-05-17 14:13:00 +0000 UTC (0+1 container statuses recorded)
May 17 14:17:48.151: INFO: Container etcd ready: true, restart count 0
May 17 14:17:48.393: INFO: Latency metrics for node kind-control-plane
May 17 14:17:48.393: INFO: Logging node info for node kind-worker
May 17 14:17:48.443: INFO: Node Info: &Node{ObjectMeta:{kind-worker 5ba69ac9-ccd5-447d-9f27-9bee10b23121 9339 0 2022-05-17 14:13:16 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker kubernetes.io/os:linux topology.hostpath.csi/node:kind-worker] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-6365":"csi-mock-csi-mock-volumes-6365"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubeadm Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-05-17 14:17:15 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2022-05-17 14:17:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-17 14:17:17 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 
+0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-17 14:17:17 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-17 14:17:17 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-17 14:17:17 +0000 UTC,LastTransitionTime:2022-05-17 14:13:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.2,},NodeAddress{Type:Hostname,Address:kind-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d7e8057f975041deb201a063c162c158,SystemUUID:fc207ccb-2e60-4380-b38c-671b606f274a,BootID:74a98684-362f-4056-ae8e-c98add53e9d2,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.19+64ad393c62a2c5,KubeProxyVersion:v1.22.10-rc.0.19+64ad393c62a2c5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:5963165f44b226e3bcc74555b40074fc3b47b362fc2fce35ac184854a283aa9a k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:63e87a9aad6a6666e3f6f4b9aaee92307a9390cfc1e41a029ceca9f6c4439478 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:123265849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 
k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:59073d1c8ff506ad93a224d17df309f7bbbc344acb535ac430fef15e3866f973 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:86742272,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:7f13d8eb6f1182786e7080a1f94ea5d4cf6873534c2d47e79c444c8216f823aa k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:53932856,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 
k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d 
k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[kubernetes.io/csi/csi-mock-csi-mock-volumes-6365^4],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-6365^4,DevicePath:,},},Config:nil,},}
May 17 14:17:48.443: INFO: Logging kubelet events for node kind-worker
May 17 14:17:48.489: INFO: Logging pods the kubelet thinks is on node kind-worker
May 17 14:17:48.525: INFO: kindnet-56p79 started at 2022-05-17 14:13:17 +0000 UTC (0+1 container statuses recorded)
May 17 14:17:48.525: INFO: Container kindnet-cni ready: true, restart count 0
May 17 14:17:48.525: INFO: verify-service-up-exec-pod-wp2dt started at <nil> (0+0 container statuses recorded)
May 17 14:17:48.525: INFO: busybox-24b48429-466f-4c42-b2f1-c9118801e253 started at 2022-05-17 14:17:10 +0000 UTC (0+1 container statuses recorded)
May 17 14:17:48.525: INFO: Container busybox ready: false, restart count 0
May 17 14:17:48.525: INFO: csi-mockplugin-attacher-0 started at 2022-05-17 14:17:02 +0000 UTC (0+1 container statuses recorded)
May 17 14:17:48.525: INFO: Container csi-attacher ready: true, restart count 0
May 17 14:17:48.525: INFO: pvc-volume-tester-qg55w started at 2022-05-17 14:17:14 +0000 UTC (0+1 container statuses recorded)
May 17 14:17:48.525: INFO: Container volume-tester ready: true, restart count 0
May 17 14:17:48.525: INFO: test-deployment-d4dfddfbf-mlnlt started at 2022-05-17 14:17:45 +0000 UTC (0+1 container statuses recorded)
May 17 14:17:48.525: INFO: Container test-deployment ready: true, restart count 0
May 17 14:17:48.525: INFO: dns-test-0573869c-d151-40f3-96d9-b55c33f9783c started at <nil> (0+0 container statuses recorded)
May 17 14:17:48.525: INFO: affinity-clusterip-t7cjz started at 2022-05-17 14:17:14 +0000 UTC (0+1 container statuses recorded)
May 17 14:17:48.525: INFO: Container affinity-clusterip ready: true, restart count 0
May 17 14:17:48.525: INFO: affinity-clusterip-5wd5j started at 2022-05-17 14:17:14 +0000 UTC (0+1 container statuses recorded)
May 17 14:17:48.525: INFO: Container affinity-clusterip ready: false, restart count 0
May 17 14:17:48.525: INFO: test-deployment-855f7994f9-8j2fg started at 2022-05-17 14:17:14 +0000 UTC (0+1 container statuses recorded)
May 17 14:17:48.525: INFO: Container test-deployment ready: true, restart count 0
May 17 14:17:48.525: INFO: startup-77b87ccc-217f-4a4f-9d51-01a8445ab4a1 started at 2022-05-17 14:17:23 +0000 UTC (0+1 container statuses recorded)
May 17 14:17:48.525: INFO: Container busybox ready: false, restart count 0
May 17 14:17:48.525: INFO: test-deployment-56c98d85f9-p4bxc started at 2022-05-17 14:17:37 +0000 UTC (0+1 container statuses recorded)
May 17 14:17:48.525: INFO: Container test-deployment ready: true, restart count 0
May 17 14:17:48.525: INFO: hostexec-kind-worker-95kts started at 2022-05-17 14:17:38 +0000 UTC (0+1 container statuses recorded)
May 17 14:17:48.525: INFO: Container agnhost-container ready: true, restart count 0
May 17 14:17:48.525: INFO: execpodvgffr started at 2022-05-17 14:17:43 +0000 UTC (0+1 container statuses recorded)
May 17 14:17:48.525: INFO: Container agnhost-container ready: false, restart count 0
May 17 14:17:48.525: INFO: csi-mockplugin-0 started at 2022-05-17 14:17:02 +0000 UTC (0+3 container statuses recorded)
May 17 14:17:48.525: INFO: Container csi-provisioner ready: true, restart count 0
May 17 14:17:48.525: INFO: Container driver-registrar ready: true, restart count 0
May 17 14:17:48.525: INFO: Container mock ready: true, restart count 0
May 17 14:17:48.525: INFO: webhook-to-be-mutated started at 2022-05-17 14:17:17 +0000 UTC (1+1 container statuses recorded)
May 17 14:17:48.525: INFO: Init container webhook-added-init-container ready: false, restart count 0
May 17 14:17:48.525: INFO: Container example ready: false, restart count 0
May 17 14:17:48.525: INFO: externalsvc-b8fmx started at 2022-05-17 14:17:43 +0000 UTC (0+1 container statuses recorded)
May 17 14:17:48.525: INFO: Container externalsvc ready: false, restart count 0
May 17 14:17:48.525: INFO: pod-subpath-test-inlinevolume-nb4t started at <nil> (0+0 container statuses recorded)
May 17 14:17:48.526: INFO: service-proxy-toggled-mlq7q started at 2022-05-17 14:15:47 +0000 UTC (0+1 container statuses recorded)
May 17 14:17:48.526: INFO: Container service-proxy-toggled ready: true, restart count 0
May 17 14:17:48.526: INFO: externalname-service-hg5wl started at 2022-05-17 14:17:31 +0000 UTC (0+1 container statuses recorded)
May 17 14:17:48.526: INFO: Container externalname-service ready: true, restart count 0
May 17 14:17:48.526: INFO: helper-pod-delete-pvc-5edaa4a3-0815-47e7-9dbd-29dd3fc82335 started at 2022-05-17 14:17:42 +0000 UTC (0+1 container statuses recorded)
May 17 14:17:48.526: INFO: Container helper-pod ready: false, restart count 0
May 17 14:17:48.526: INFO: service-proxy-disabled-qk4md started at 2022-05-17 14:15:32 +0000 UTC (0+1 container statuses recorded)
May 17 14:17:48.526: INFO: Container service-proxy-disabled ready: true, restart count 0
May 17 14:17:48.526: INFO: pod-qos-class-5c206c25-c993-4e12-9d89-06931c05bb47 started at 2022-05-17 14:16:07 +0000 UTC (0+1 container statuses recorded)
May 17 14:17:48.526: INFO: Container agnhost ready: false, restart count 0
May 17 14:17:48.526: INFO: svc-latency-rc-fpmnc started at 2022-05-17 14:17:32 +0000 UTC (0+1 container statuses recorded)
May 17 14:17:48.526: INFO: Container svc-latency-rc ready: true, restart count 0
May 17 14:17:48.526: INFO: pod-exec-websocket-766d5ad4-7c24-4744-98dd-2378fe5fa65d started at 2022-05-17 14:17:37 +0000 UTC (0+1 container statuses recorded)
May 17 14:17:48.526: INFO: Container main ready: true, restart count 0
May 17 14:17:48.526: INFO: kube-proxy-dr9f5 started at 2022-05-17 14:13:23 +0000 UTC (0+1 container statuses recorded)
May 17 14:17:48.526: INFO: Container kube-proxy ready: true, restart count 0
May 17 14:17:48.526: INFO: concurrent-27546616-5t86w started at 2022-05-17 14:16:00 +0000 UTC (0+1 container statuses recorded)
May 17 14:17:48.526: INFO: Container c ready: false, restart count 0
May 17 14:17:48.966: INFO: Latency metrics for node kind-worker
May 17 14:17:48.966: INFO: Logging node info for node kind-worker2
May 17 14:17:48.980: INFO: Node Info: &Node{ObjectMeta:{kind-worker2 d009e97f-d73b-4ed0-8afd-9f776bcb704d 9467 0 2022-05-17 14:13:16 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker2 kubernetes.io/os:linux topology.hostpath.csi/node:kind-worker2] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-811":"csi-mock-csi-mock-volumes-811"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} }
{kubeadm Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubelet Update v1 2022-05-17 14:15:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-17 
14:16:47 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-17 14:16:47 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-17 14:16:47 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-17 14:16:47 +0000 UTC,LastTransitionTime:2022-05-17 14:13:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.3,},NodeAddress{Type:Hostname,Address:kind-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:521433a178fd4494a7881394e8d644cf,SystemUUID:c86f292f-55cb-49c2-9dda-efb91a06dab9,BootID:74a98684-362f-4056-ae8e-c98add53e9d2,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.19+64ad393c62a2c5,KubeProxyVersion:v1.22.10-rc.0.19+64ad393c62a2c5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:5963165f44b226e3bcc74555b40074fc3b47b362fc2fce35ac184854a283aa9a k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:63e87a9aad6a6666e3f6f4b9aaee92307a9390cfc1e41a029ceca9f6c4439478 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:123265849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 
k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:59073d1c8ff506ad93a224d17df309f7bbbc344acb535ac430fef15e3866f973 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:7f13d8eb6f1182786e7080a1f94ea5d4cf6873534c2d47e79c444c8216f823aa k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:53932856,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 
k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b 
k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 17 14:17:48.980: INFO: Logging kubelet events for node kind-worker2 May 17 14:17:49.070: INFO: Logging pods the kubelet thinks is on node kind-worker2 May 17 14:17:49.112: INFO: kube-proxy-sdmmb started at 2022-05-17 14:13:29 +0000 UTC (0+1 container statuses recorded) May 17 14:17:49.112: INFO: Container kube-proxy ready: true, restart count 0 May 17 14:17:49.112: INFO: pvc-volume-tester-9wj4v started at 2022-05-17 14:17:19 +0000 UTC (0+1 container statuses recorded) May 17 14:17:49.112: INFO: Container volume-tester ready: true, restart count 0 May 17 14:17:49.112: INFO: hostexec-kind-worker2-7bqcw started at 2022-05-17 14:17:07 +0000 UTC (0+1 container statuses recorded) May 17 14:17:49.112: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:17:49.112: INFO: pod-af64d1e2-06c3-4fc4-acb8-ea4253ac1660 started at 2022-05-17 14:17:30 +0000 UTC (0+1 container statuses recorded) May 17 14:17:49.112: INFO: Container write-pod ready: true, restart count 0 May 17 14:17:49.112: INFO: hostexec-kind-worker2-dkdr8 started at <nil> (0+0 container statuses recorded) May 17 14:17:49.112: INFO: test-deployment-d4dfddfbf-clrmr started at <nil> (0+0 container statuses recorded) May 17 14:17:49.112: INFO: inline-volume-zsnxb started at 2022-05-17 14:17:36 +0000 UTC (0+1 container statuses recorded) May 17 14:17:49.112: INFO: Container 
volume-tester ready: false, restart count 0 May 17 14:17:49.112: INFO: service-proxy-toggled-djh27 started at 2022-05-17 14:15:47 +0000 UTC (0+1 container statuses recorded) May 17 14:17:49.112: INFO: Container service-proxy-toggled ready: true, restart count 0 May 17 14:17:49.112: INFO: externalname-service-w2dn6 started at 2022-05-17 14:17:31 +0000 UTC (0+1 container statuses recorded) May 17 14:17:49.112: INFO: Container externalname-service ready: true, restart count 0 May 17 14:17:49.112: INFO: kindnet-mq8qb started at 2022-05-17 14:13:17 +0000 UTC (0+1 container statuses recorded) May 17 14:17:49.112: INFO: Container kindnet-cni ready: true, restart count 0 May 17 14:17:49.112: INFO: test-pod started at 2022-05-17 14:17:31 +0000 UTC (0+1 container statuses recorded) May 17 14:17:49.112: INFO: Container test-container ready: false, restart count 0 May 17 14:17:49.112: INFO: test-webserver-4226f15c-cd18-4e90-bf31-e079c9669de0 started at 2022-05-17 14:16:59 +0000 UTC (0+1 container statuses recorded) May 17 14:17:49.112: INFO: Container test-webserver ready: true, restart count 0 May 17 14:17:49.112: INFO: verify-service-up-host-exec-pod started at 2022-05-17 14:17:33 +0000 UTC (0+1 container statuses recorded) May 17 14:17:49.112: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:17:49.112: INFO: hostexec-kind-worker2-87ff2 started at 2022-05-17 14:17:18 +0000 UTC (0+1 container statuses recorded) May 17 14:17:49.112: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:17:49.112: INFO: externalsvc-5n96d started at 2022-05-17 14:17:43 +0000 UTC (0+1 container statuses recorded) May 17 14:17:49.112: INFO: Container externalsvc ready: true, restart count 0 May 17 14:17:49.112: INFO: csi-mockplugin-0 started at 2022-05-17 14:15:19 +0000 UTC (0+4 container statuses recorded) May 17 14:17:49.112: INFO: Container busybox ready: true, restart count 0 May 17 14:17:49.112: INFO: Container csi-provisioner ready: true, restart 
count 2 May 17 14:17:49.112: INFO: Container driver-registrar ready: true, restart count 0 May 17 14:17:49.112: INFO: Container mock ready: true, restart count 0 May 17 14:17:49.112: INFO: csi-mockplugin-0 started at 2022-05-17 14:17:02 +0000 UTC (0+3 container statuses recorded) May 17 14:17:49.112: INFO: Container csi-provisioner ready: true, restart count 0 May 17 14:17:49.112: INFO: Container driver-registrar ready: true, restart count 0 May 17 14:17:49.112: INFO: Container mock ready: true, restart count 0 May 17 14:17:49.112: INFO: ss-0 started at 2022-05-17 14:17:34 +0000 UTC (0+1 container statuses recorded) May 17 14:17:49.112: INFO: Container webserver ready: false, restart count 0 May 17 14:17:49.112: INFO: affinity-clusterip-vzw5g started at 2022-05-17 14:17:14 +0000 UTC (0+1 container statuses recorded) May 17 14:17:49.112: INFO: Container affinity-clusterip ready: true, restart count 0 May 17 14:17:49.112: INFO: hostexec-kind-worker2-dcg9m started at <nil> (0+0 container statuses recorded) May 17 14:17:49.112: INFO: pod-subpath-test-preprovisionedpv-lqhb started at 2022-05-17 14:17:45 +0000 UTC (0+2 container statuses recorded) May 17 14:17:49.112: INFO: Container test-container-subpath-preprovisionedpv-lqhb ready: false, restart count 0 May 17 14:17:49.112: INFO: Container test-container-volume-preprovisionedpv-lqhb ready: false, restart count 0 May 17 14:17:49.112: INFO: service-proxy-toggled-wrl7r started at 2022-05-17 14:15:47 +0000 UTC (0+1 container statuses recorded) May 17 14:17:49.112: INFO: Container service-proxy-toggled ready: true, restart count 0 May 17 14:17:49.112: INFO: test-deployment-56c98d85f9-s9c26 started at <nil> (0+0 container statuses recorded) May 17 14:17:49.112: INFO: csi-hostpathplugin-0 started at <nil> (0+0 container statuses recorded) May 17 14:17:49.112: INFO: service-proxy-disabled-brdhb started at 2022-05-17 14:15:32 +0000 UTC (0+1 container statuses recorded) May 17 14:17:49.112: INFO: Container 
service-proxy-disabled ready: true, restart count 0 May 17 14:17:49.112: INFO: tester started at 2022-05-17 14:17:07 +0000 UTC (0+1 container statuses recorded) May 17 14:17:49.112: INFO: Container tester ready: true, restart count 0 May 17 14:17:49.112: INFO: service-proxy-disabled-xzkc8 started at 2022-05-17 14:15:32 +0000 UTC (0+1 container statuses recorded) May 17 14:17:49.112: INFO: Container service-proxy-disabled ready: true, restart count 0 May 17 14:17:49.112: INFO: hostexec-kind-worker2-chzrg started at 2022-05-17 14:17:12 +0000 UTC (0+1 container statuses recorded) May 17 14:17:49.112: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:17:49.112: INFO: sample-webhook-deployment-78988fc6cd-kpqm2 started at 2022-05-17 14:17:30 +0000 UTC (0+1 container statuses recorded) May 17 14:17:49.112: INFO: Container sample-webhook ready: true, restart count 0 May 17 14:17:49.450: INFO: Latency metrics for node kind-worker2 May 17 14:17:49.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8678" for this suite.
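Both failures in this run surface as HTTP 500 `*errors.StatusError` values carrying transient server-side messages ("resource quota evaluation timed out", "etcdserver: request timed out"). As a minimal, self-contained sketch of how a client might distinguish and retry such transient errors, the following uses a hypothetical `apiError` stand-in rather than the real apimachinery type; the 5xx heuristic and all names here are assumptions for illustration, not part of the e2e framework:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// apiError mimics the shape of the *errors.StatusError values in the logs
// above: an HTTP 500 carrying a transient message such as
// "etcdserver: request timed out". Hypothetical stand-in, not the real
// k8s.io/apimachinery type.
type apiError struct {
	Code    int
	Message string
}

func (e *apiError) Error() string { return e.Message }

// isTransient reports whether an error looks retryable. The heuristic here
// (any 5xx server-side code) is an assumption for illustration only.
func isTransient(err error) bool {
	var ae *apiError
	return errors.As(err, &ae) && ae.Code >= 500
}

// retry invokes fn up to attempts times with exponential backoff between
// transient failures, returning permanent errors (or success) immediately.
func retry(attempts int, delay time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil || !isTransient(err) {
			return err
		}
		time.Sleep(delay)
		delay *= 2
	}
	return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
}

// simulate fails twice with the etcd timeout seen in the log, then succeeds.
func simulate() (int, error) {
	calls := 0
	err := retry(5, time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return &apiError{Code: 500, Message: "etcdserver: request timed out"}
		}
		return nil
	})
	return calls, err
}

func main() {
	calls, err := simulate()
	fmt.Println(calls, err) // prints: 3 <nil>
}
```

Under this sketch, the two failed setup calls above (role-binding creation, pod submission) would have been retried instead of failing the spec outright; whether that masks or surfaces the underlying apiserver/etcd slowness is a policy choice.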
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-node\]\sEvents\sshould\sbe\ssent\sby\skubelets\sand\sthe\sscheduler\sabout\spods\sscheduling\sand\srunning\s\s\[Conformance\]$'
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 17 14:16:22.808: Unexpected error: <*errors.StatusError | 0xc000df9f40>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "etcdserver: request timed out", Reason: "", Details: nil, Code: 500, }, } etcdserver: request timed out occurred /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/events.go:80 (from junit_23.xml)
[BeforeEach] [sig-node] Events /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 17 14:16:06.094: INFO: >>> kubeConfig: /root/.kube/kind-test-config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes May 17 14:16:22.808: FAIL: Unexpected error: <*errors.StatusError | 0xc000df9f40>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "etcdserver: request timed out", Reason: "", Details: nil, Code: 500, }, } etcdserver: request timed out occurred Full Stack Trace k8s.io/kubernetes/test/e2e/node.glob..func3.1() /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/events.go:80 +0x585 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000529680) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc000529680) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b testing.tRunner(0xc000529680, 0x72e36d8) /usr/local/go/src/testing/testing.go:1203 +0xe5 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1248 +0x2b3 STEP: deleting the pod [AfterEach] [sig-node] Events /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "events-7130". STEP: Found 1 events.
May 17 14:16:25.731: INFO: At 2022-05-17 14:16:06 +0000 UTC - event for send-events-4adc3414-703f-4b62-a42c-926d4b4b6fc2: {default-scheduler } Scheduled: Successfully assigned events-7130/send-events-4adc3414-703f-4b62-a42c-926d4b4b6fc2 to kind-worker May 17 14:16:25.793: INFO: POD NODE PHASE GRACE CONDITIONS May 17 14:16:25.793: INFO: send-events-4adc3414-703f-4b62-a42c-926d4b4b6fc2 kind-worker Pending 30s [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-17 14:16:06 +0000 UTC }] May 17 14:16:25.793: INFO: May 17 14:16:25.844: INFO: Logging node info for node kind-control-plane May 17 14:16:25.958: INFO: Node Info: &Node{ObjectMeta:{kind-control-plane 1fc1c875-88c2-4f79-b48f-f6ef32b124da 586 0 2022-05-17 14:12:51 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-05-17 14:12:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-05-17 14:12:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2022-05-17 14:13:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2022-05-17 14:13:20 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-17 14:13:20 +0000 UTC,LastTransitionTime:2022-05-17 14:12:47 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-17 14:13:20 +0000 UTC,LastTransitionTime:2022-05-17 14:12:47 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-17 14:13:20 +0000 
UTC,LastTransitionTime:2022-05-17 14:12:47 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-17 14:13:20 +0000 UTC,LastTransitionTime:2022-05-17 14:13:20 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.4,},NodeAddress{Type:Hostname,Address:kind-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:97599c4ddadc4d3eae566d304d7d104d,SystemUUID:e786229d-3adc-40a9-a01d-f7a176ae9288,BootID:74a98684-362f-4056-ae8e-c98add53e9d2,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.19+64ad393c62a2c5,KubeProxyVersion:v1.22.10-rc.0.19+64ad393c62a2c5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:5963165f44b226e3bcc74555b40074fc3b47b362fc2fce35ac184854a283aa9a k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:63e87a9aad6a6666e3f6f4b9aaee92307a9390cfc1e41a029ceca9f6c4439478 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:123265849,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:59073d1c8ff506ad93a224d17df309f7bbbc344acb535ac430fef15e3866f973 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:7f13d8eb6f1182786e7080a1f94ea5d4cf6873534c2d47e79c444c8216f823aa 
k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:53932856,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 17 14:16:25.958: INFO: Logging kubelet events for node kind-control-plane May 17 14:16:25.971: INFO: Logging pods the kubelet thinks is on node kind-control-plane May 17 14:16:26.067: INFO: etcd-kind-control-plane started at 2022-05-17 14:13:00 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.067: INFO: Container etcd ready: true, restart count 0 May 17 14:16:26.067: INFO: kube-apiserver-kind-control-plane started at 2022-05-17 14:13:01 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.067: INFO: Container kube-apiserver ready: true, restart count 0 May 17 14:16:26.067: INFO: kube-controller-manager-kind-control-plane started at 2022-05-17 14:13:01 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.067: INFO: Container kube-controller-manager ready: false, restart count 0 May 17 14:16:26.067: INFO: kube-scheduler-kind-control-plane started at 2022-05-17 14:13:00 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.067: INFO: Container kube-scheduler ready: false, restart count 0 May 17 14:16:26.067: INFO: kindnet-s6wb7 started at 2022-05-17 14:13:07 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.067: INFO: Container kindnet-cni ready: true, restart count 0 May 17 14:16:26.067: INFO: coredns-78fcd69978-b6zn2 started at 2022-05-17 14:13:20 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.067: INFO: Container coredns ready: true, restart 
count 0 May 17 14:16:26.067: INFO: local-path-provisioner-6c9449b9dd-bllk2 started at 2022-05-17 14:13:20 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.067: INFO: Container local-path-provisioner ready: true, restart count 0 May 17 14:16:26.067: INFO: coredns-78fcd69978-r97ls started at 2022-05-17 14:13:20 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.067: INFO: Container coredns ready: true, restart count 0 May 17 14:16:26.067: INFO: kube-proxy-5tzjs started at 2022-05-17 14:13:26 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.067: INFO: Container kube-proxy ready: true, restart count 0 May 17 14:16:26.500: INFO: Latency metrics for node kind-control-plane May 17 14:16:26.500: INFO: Logging node info for node kind-worker May 17 14:16:26.597: INFO: Node Info: &Node{ObjectMeta:{kind-worker 5ba69ac9-ccd5-447d-9f27-9bee10b23121 6999 0 2022-05-17 14:13:16 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker kubernetes.io/os:linux topology.hostpath.csi/node:kind-worker] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-9914":"kind-worker","csi-mock-csi-mock-volumes-7782":"csi-mock-csi-mock-volumes-7782"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubeadm Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-05-17 14:15:50 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2022-05-17 14:15:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:56 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 
+0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:56 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:56 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-17 14:15:56 +0000 UTC,LastTransitionTime:2022-05-17 14:13:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.2,},NodeAddress{Type:Hostname,Address:kind-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d7e8057f975041deb201a063c162c158,SystemUUID:fc207ccb-2e60-4380-b38c-671b606f274a,BootID:74a98684-362f-4056-ae8e-c98add53e9d2,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.19+64ad393c62a2c5,KubeProxyVersion:v1.22.10-rc.0.19+64ad393c62a2c5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:5963165f44b226e3bcc74555b40074fc3b47b362fc2fce35ac184854a283aa9a k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:63e87a9aad6a6666e3f6f4b9aaee92307a9390cfc1e41a029ceca9f6c4439478 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:123265849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 
k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:59073d1c8ff506ad93a224d17df309f7bbbc344acb535ac430fef15e3866f973 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:86742272,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:7f13d8eb6f1182786e7080a1f94ea5d4cf6873534c2d47e79c444c8216f823aa k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:53932856,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 
k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b 
k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-ephemeral-9914^da35d3ba-d5eb-11ec-918a-225d08066155],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-ephemeral-9914^da35d3ba-d5eb-11ec-918a-225d08066155,DevicePath:,},},Config:nil,},} May 17 14:16:26.598: INFO: Logging kubelet events for node kind-worker May 17 14:16:26.669: INFO: Logging pods the kubelet thinks is on node kind-worker May 17 14:16:26.835: INFO: csi-mockplugin-0 started at <nil> (0+0 container statuses recorded) May 17 14:16:26.835: INFO: implicit-nonroot-uid started at <nil> (0+0 container statuses recorded) May 17 14:16:26.835: INFO: ss-1 started at <nil> (0+0 container statuses recorded) May 17 14:16:26.835: INFO: kindnet-56p79 started at 2022-05-17 14:13:17 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.835: INFO: Container kindnet-cni ready: true, restart count 0 May 17 14:16:26.835: INFO: affinity-nodeport-xj24q started at 2022-05-17 14:15:27 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.835: INFO: Container affinity-nodeport ready: true, restart count 0 May 17 14:16:26.835: INFO: csi-hostpathplugin-0 started at 2022-05-17 14:15:44 +0000 UTC (0+7 container statuses recorded) May 17 14:16:26.835: INFO: Container csi-attacher ready: true, restart count 0 May 17 14:16:26.835: INFO: Container csi-provisioner ready: true, restart count 0 May 17 14:16:26.835: INFO: Container csi-resizer 
ready: true, restart count 0 May 17 14:16:26.835: INFO: Container csi-snapshotter ready: true, restart count 0 May 17 14:16:26.835: INFO: Container hostpath ready: true, restart count 0 May 17 14:16:26.835: INFO: Container liveness-probe ready: true, restart count 0 May 17 14:16:26.835: INFO: Container node-driver-registrar ready: true, restart count 0 May 17 14:16:26.835: INFO: deployment-55649fd747-6dbnz started at 2022-05-17 14:14:10 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.835: INFO: Container nginx ready: false, restart count 0 May 17 14:16:26.835: INFO: send-events-4adc3414-703f-4b62-a42c-926d4b4b6fc2 started at <nil> (0+0 container statuses recorded) May 17 14:16:26.836: INFO: termination-message-container5de5961b-b8bb-4868-adb0-b23802f044ae started at <nil> (0+0 container statuses recorded) May 17 14:16:26.836: INFO: service-proxy-toggled-mlq7q started at 2022-05-17 14:15:47 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.836: INFO: Container service-proxy-toggled ready: true, restart count 0 May 17 14:16:26.836: INFO: hostexec-kind-worker-6t6pq started at 2022-05-17 14:15:52 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.836: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:26.836: INFO: agnhost-primary-5db8ddd565-ktltd started at 2022-05-17 14:15:55 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.836: INFO: Container primary ready: true, restart count 0 May 17 14:16:26.836: INFO: agnhost-replica-6bcf79b489-885sh started at 2022-05-17 14:15:56 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.836: INFO: Container replica ready: true, restart count 0 May 17 14:16:26.836: INFO: deployment-55649fd747-f5bss started at 2022-05-17 14:14:10 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.836: INFO: Container nginx ready: false, restart count 0 May 17 14:16:26.836: INFO: pod-terminate-status-2-1 started at <nil> (0+0 container statuses recorded) May 17 
14:16:26.836: INFO: service-proxy-disabled-qk4md started at 2022-05-17 14:15:32 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.836: INFO: Container service-proxy-disabled ready: true, restart count 0 May 17 14:16:26.836: INFO: busybox-readonly-fs83fb0cb9-60d4-419e-8e63-6beee25ccd06 started at 2022-05-17 14:15:24 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.836: INFO: Container busybox-readonly-fs83fb0cb9-60d4-419e-8e63-6beee25ccd06 ready: true, restart count 0 May 17 14:16:26.836: INFO: pod-b8564c48-8aa2-4bdb-a5eb-256f8afd47cc started at <nil> (0+0 container statuses recorded) May 17 14:16:26.836: INFO: concurrent-27546615-rw8md started at 2022-05-17 14:15:00 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.836: INFO: Container c ready: true, restart count 0 May 17 14:16:26.836: INFO: hostexec-kind-worker-kbzwd started at <nil> (0+0 container statuses recorded) May 17 14:16:26.836: INFO: service-headless-tgqlh started at 2022-05-17 14:13:53 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.836: INFO: Container service-headless ready: true, restart count 0 May 17 14:16:26.836: INFO: pod-qos-class-5c206c25-c993-4e12-9d89-06931c05bb47 started at <nil> (0+0 container statuses recorded) May 17 14:16:26.836: INFO: service-headless-q5g89 started at 2022-05-17 14:13:53 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.836: INFO: Container service-headless ready: true, restart count 0 May 17 14:16:26.836: INFO: service-headless-toggled-9zh79 started at 2022-05-17 14:14:08 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.836: INFO: Container service-headless-toggled ready: true, restart count 0 May 17 14:16:26.836: INFO: service-headless-toggled-nw4rn started at 2022-05-17 14:14:08 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.836: INFO: Container service-headless-toggled ready: true, restart count 0 May 17 14:16:26.836: INFO: inline-volume-tester-h6pnw started at <nil> (0+0 container 
statuses recorded) May 17 14:16:26.836: INFO: frontend-685fc574d5-64jqm started at 2022-05-17 14:15:55 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.836: INFO: Container guestbook-frontend ready: true, restart count 0 May 17 14:16:26.836: INFO: pod-terminate-status-0-1 started at <nil> (0+0 container statuses recorded) May 17 14:16:26.836: INFO: kube-proxy-dr9f5 started at 2022-05-17 14:13:23 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.836: INFO: Container kube-proxy ready: true, restart count 0 May 17 14:16:26.836: INFO: hostexec-kind-worker-28fdh started at 2022-05-17 14:15:56 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.836: INFO: Container agnhost-container ready: false, restart count 0 May 17 14:16:26.836: INFO: pod-client started at 2022-05-17 14:15:09 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.836: INFO: Container pod-client ready: true, restart count 0 May 17 14:16:26.836: INFO: liveness-7a123bfb-f826-4ce3-ad61-ebc03b5bc931 started at 2022-05-17 14:14:13 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.836: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:26.836: INFO: concurrent-27546616-5t86w started at <nil> (0+0 container statuses recorded) May 17 14:16:26.836: INFO: hostexec-kind-worker-gzkkg started at 2022-05-17 14:15:51 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.836: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:28.112: INFO: Latency metrics for node kind-worker May 17 14:16:28.112: INFO: Logging node info for node kind-worker2 May 17 14:16:28.174: INFO: Node Info: &Node{ObjectMeta:{kind-worker2 d009e97f-d73b-4ed0-8afd-9f776bcb704d 6998 0 2022-05-17 14:13:16 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker2 kubernetes.io/os:linux topology.hostpath.csi/node:kind-worker2] 
map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-2591":"kind-worker2","csi-mock-csi-mock-volumes-8289":"csi-mock-csi-mock-volumes-8289"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubeadm Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubelet Update v1 2022-05-17 14:15:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 
DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:47 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:47 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:47 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-17 14:15:47 +0000 UTC,LastTransitionTime:2022-05-17 14:13:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.3,},NodeAddress{Type:Hostname,Address:kind-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:521433a178fd4494a7881394e8d644cf,SystemUUID:c86f292f-55cb-49c2-9dda-efb91a06dab9,BootID:74a98684-362f-4056-ae8e-c98add53e9d2,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 
21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.19+64ad393c62a2c5,KubeProxyVersion:v1.22.10-rc.0.19+64ad393c62a2c5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:5963165f44b226e3bcc74555b40074fc3b47b362fc2fce35ac184854a283aa9a k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:63e87a9aad6a6666e3f6f4b9aaee92307a9390cfc1e41a029ceca9f6c4439478 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:123265849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:59073d1c8ff506ad93a224d17df309f7bbbc344acb535ac430fef15e3866f973 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:7f13d8eb6f1182786e7080a1f94ea5d4cf6873534c2d47e79c444c8216f823aa k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:53932856,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 
k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 
k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 17 14:16:28.175: INFO: Logging kubelet events for node kind-worker2 May 17 14:16:28.229: INFO: Logging pods the kubelet thinks is on node kind-worker2 May 17 14:16:28.344: INFO: service-headless-9mh45 started at 2022-05-17 14:13:53 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.344: INFO: Container service-headless ready: true, restart count 0 May 17 14:16:28.344: INFO: pod-a2ba308d-816a-406c-bf57-d5840e5d5387 started at 2022-05-17 14:15:58 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.344: INFO: Container write-pod ready: true, restart count 0 May 17 14:16:28.344: INFO: affinity-nodeport-f2r7w started at 2022-05-17 14:15:27 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.344: INFO: Container affinity-nodeport ready: true, restart count 0 May 17 14:16:28.344: INFO: hostexec-kind-worker2-kwzz9 started at 2022-05-17 14:15:35 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.344: INFO: Container agnhost-container 
ready: true, restart count 0 May 17 14:16:28.345: INFO: kindnet-mq8qb started at 2022-05-17 14:13:17 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.345: INFO: Container kindnet-cni ready: true, restart count 0 May 17 14:16:28.345: INFO: pod-2d828178-7e03-4331-91b0-2165aa4eeee5 started at <nil> (0+0 container statuses recorded) May 17 14:16:28.345: INFO: csi-mockplugin-0 started at 2022-05-17 14:15:19 +0000 UTC (0+4 container statuses recorded) May 17 14:16:28.345: INFO: Container busybox ready: true, restart count 0 May 17 14:16:28.345: INFO: Container csi-provisioner ready: true, restart count 0 May 17 14:16:28.345: INFO: Container driver-registrar ready: true, restart count 0 May 17 14:16:28.345: INFO: Container mock ready: true, restart count 0 May 17 14:16:28.345: INFO: hostexec-kind-worker2-k8cx7 started at 2022-05-17 14:15:45 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.345: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:28.345: INFO: pod-server-2 started at 2022-05-17 14:15:45 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.345: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:28.345: INFO: hostexec-kind-worker2-gh6gn started at 2022-05-17 14:15:19 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.345: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:28.345: INFO: ss-0 started at 2022-05-17 14:15:25 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.345: INFO: Container webserver ready: true, restart count 0 May 17 14:16:28.345: INFO: pod-23740531-141e-4062-9ddb-69c15ab1b060 started at 2022-05-17 14:15:53 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.345: INFO: Container write-pod ready: true, restart count 0 May 17 14:16:28.345: INFO: csi-hostpathplugin-0 started at 2022-05-17 14:13:56 +0000 UTC (0+7 container statuses recorded) May 17 14:16:28.345: INFO: Container csi-attacher ready: true, restart count 
0 May 17 14:16:28.345: INFO: Container csi-provisioner ready: true, restart count 0 May 17 14:16:28.345: INFO: Container csi-resizer ready: true, restart count 0 May 17 14:16:28.345: INFO: Container csi-snapshotter ready: true, restart count 0 May 17 14:16:28.345: INFO: Container hostpath ready: true, restart count 0 May 17 14:16:28.345: INFO: Container liveness-probe ready: true, restart count 0 May 17 14:16:28.345: INFO: Container node-driver-registrar ready: true, restart count 0 May 17 14:16:28.345: INFO: service-proxy-toggled-wrl7r started at 2022-05-17 14:15:47 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.345: INFO: Container service-proxy-toggled ready: true, restart count 0 May 17 14:16:28.345: INFO: ss2-0 started at 2022-05-17 14:16:01 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.345: INFO: Container webserver ready: false, restart count 0 May 17 14:16:28.345: INFO: rs-vkdrm started at 2022-05-17 14:14:49 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.345: INFO: Container donothing ready: false, restart count 0 May 17 14:16:28.345: INFO: pvc-volume-tester-wh448 started at 2022-05-17 14:15:47 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.345: INFO: Container volume-tester ready: true, restart count 0 May 17 14:16:28.345: INFO: pod-e8401689-4cef-4e20-9f2e-264844f9d704 started at 2022-05-17 14:15:53 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.345: INFO: Container write-pod ready: true, restart count 0 May 17 14:16:28.345: INFO: service-proxy-disabled-xzkc8 started at 2022-05-17 14:15:32 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.345: INFO: Container service-proxy-disabled ready: true, restart count 0 May 17 14:16:28.345: INFO: frontend-685fc574d5-lc6wl started at 2022-05-17 14:15:55 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.345: INFO: Container guestbook-frontend ready: true, restart count 0 May 17 14:16:28.345: INFO: frontend-685fc574d5-mj2wn started 
at 2022-05-17 14:15:55 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.345: INFO: Container guestbook-frontend ready: true, restart count 0 May 17 14:16:28.345: INFO: service-proxy-disabled-brdhb started at 2022-05-17 14:15:32 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.345: INFO: Container service-proxy-disabled ready: true, restart count 0 May 17 14:16:28.345: INFO: csi-mockplugin-attacher-0 started at 2022-05-17 14:13:58 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.345: INFO: Container csi-attacher ready: true, restart count 0 May 17 14:16:28.345: INFO: agnhost-replica-6bcf79b489-xd4h9 started at 2022-05-17 14:15:56 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.345: INFO: Container replica ready: true, restart count 0 May 17 14:16:28.345: INFO: pod-9ec2f9c5-e367-4a4f-8edf-f159344127d8 started at 2022-05-17 14:16:03 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.345: INFO: Container write-pod ready: true, restart count 0 May 17 14:16:28.345: INFO: kube-proxy-sdmmb started at 2022-05-17 14:13:29 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.345: INFO: Container kube-proxy ready: true, restart count 0 May 17 14:16:28.345: INFO: inline-volume-tester-tx6st started at 2022-05-17 14:13:56 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.345: INFO: Container csi-volume-tester ready: true, restart count 0 May 17 14:16:28.345: INFO: csi-mockplugin-0 started at 2022-05-17 14:13:58 +0000 UTC (0+3 container statuses recorded) May 17 14:16:28.345: INFO: Container csi-provisioner ready: true, restart count 0 May 17 14:16:28.345: INFO: Container driver-registrar ready: true, restart count 0 May 17 14:16:28.345: INFO: Container mock ready: true, restart count 0 May 17 14:16:28.345: INFO: service-headless-toggled-dvx6b started at 2022-05-17 14:14:08 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.345: INFO: Container service-headless-toggled ready: true, restart count 0 May 
17 14:16:28.345: INFO: service-proxy-toggled-djh27 started at 2022-05-17 14:15:47 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.345: INFO: Container service-proxy-toggled ready: true, restart count 0 May 17 14:16:30.881: INFO: Latency metrics for node kind-worker2 May 17 14:16:30.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-7130" for this suite.
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-node\]\sProbing\scontainer\sshould\s\*not\*\sbe\srestarted\swith\sa\stcp\:8080\sliveness\sprobe\s\[NodeConformance\]\s\[Conformance\]$'
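For context, the `--ginkgo.focus` argument in the command above is a regular expression matched against the full, space-joined test name. A quick way to sanity-check that the escaped pattern selects exactly this test (a sketch using Python's `re`; Ginkgo uses Go's `regexp`, but both engines treat this escape-heavy subset the same way):

```python
import re

# Focus pattern copied verbatim from the repro command above, with only the
# surrounding shell quoting removed. Backslash-escaped punctuation such as
# \[ \] \- \: \* matches those characters literally.
pattern = (r"Kubernetes\se2e\ssuite\s\[sig\-node\]\sProbing\scontainer\sshould"
           r"\s\*not\*\sbe\srestarted\swith\sa\stcp\:8080\sliveness\sprobe"
           r"\s\[NodeConformance\]\s\[Conformance\]$")

# The full test name as Ginkgo reports it.
test_name = ("Kubernetes e2e suite [sig-node] Probing container should *not* be "
             "restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]")

# The pattern matches, so a focused run executes only this spec.
print(bool(re.search(pattern, test_name)))
```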
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 17 14:16:22.832: getting pod Unexpected error: <*errors.StatusError | 0xc00098c5a0>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "etcdserver: request timed out", Reason: "", Details: nil, Code: 500, }, } etcdserver: request timed out occurred /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:701 (from junit_16.xml)
[BeforeEach] [sig-node] Probing container /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 17 14:14:13.005: INFO: >>> kubeConfig: /root/.kube/kind-test-config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod liveness-7a123bfb-f826-4ce3-ad61-ebc03b5bc931 in namespace container-probe-5327 May 17 14:14:21.152: INFO: Started pod liveness-7a123bfb-f826-4ce3-ad61-ebc03b5bc931 in namespace container-probe-5327 STEP: checking the pod's current state and verifying that restartCount is present May 17 14:14:21.196: INFO: Initial restart count of pod liveness-7a123bfb-f826-4ce3-ad61-ebc03b5bc931 is 0 May 17 14:16:22.832: FAIL: getting pod Unexpected error: <*errors.StatusError | 0xc00098c5a0>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "etcdserver: request timed out", Reason: "", Details: nil, Code: 500, }, } etcdserver: request timed out occurred Full Stack Trace k8s.io/kubernetes/test/e2e/common/node.RunLivenessTest(0xc00060a2c0, 0xc0017efc00, 0x0, 0x37e11d6000) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:701 +0xbaa k8s.io/kubernetes/test/e2e/common/node.glob..func2.7() 
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:178 +0x137 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000d7c300) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc000d7c300) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b testing.tRunner(0xc000d7c300, 0x72e36d8) /usr/local/go/src/testing/testing.go:1203 +0xe5 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1248 +0x2b3 STEP: deleting the pod [AfterEach] [sig-node] Probing container /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "container-probe-5327". STEP: Found 4 events. May 17 14:16:25.564: INFO: At 2022-05-17 14:14:13 +0000 UTC - event for liveness-7a123bfb-f826-4ce3-ad61-ebc03b5bc931: {default-scheduler } Scheduled: Successfully assigned container-probe-5327/liveness-7a123bfb-f826-4ce3-ad61-ebc03b5bc931 to kind-worker May 17 14:16:25.564: INFO: At 2022-05-17 14:14:14 +0000 UTC - event for liveness-7a123bfb-f826-4ce3-ad61-ebc03b5bc931: {kubelet kind-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine May 17 14:16:25.564: INFO: At 2022-05-17 14:14:14 +0000 UTC - event for liveness-7a123bfb-f826-4ce3-ad61-ebc03b5bc931: {kubelet kind-worker} Created: Created container agnhost-container May 17 14:16:25.564: INFO: At 2022-05-17 14:14:14 +0000 UTC - event for liveness-7a123bfb-f826-4ce3-ad61-ebc03b5bc931: {kubelet kind-worker} Started: Started container agnhost-container May 17 14:16:25.660: INFO: POD NODE PHASE GRACE CONDITIONS May 17 14:16:25.660: INFO: liveness-7a123bfb-f826-4ce3-ad61-ebc03b5bc931 kind-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-17 14:14:13 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-17 14:14:14 +0000 UTC } {ContainersReady 
True 0001-01-01 00:00:00 +0000 UTC 2022-05-17 14:14:14 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-17 14:14:13 +0000 UTC }] May 17 14:16:25.660: INFO: May 17 14:16:25.738: INFO: Logging node info for node kind-control-plane May 17 14:16:25.794: INFO: Node Info: &Node{ObjectMeta:{kind-control-plane 1fc1c875-88c2-4f79-b48f-f6ef32b124da 586 0 2022-05-17 14:12:51 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-05-17 14:12:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-05-17 14:12:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2022-05-17 14:13:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2022-05-17 14:13:20 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-17 14:13:20 +0000 UTC,LastTransitionTime:2022-05-17 14:12:47 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-17 14:13:20 +0000 UTC,LastTransitionTime:2022-05-17 14:12:47 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-17 14:13:20 +0000 UTC,LastTransitionTime:2022-05-17 14:12:47 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-17 14:13:20 +0000 
UTC,LastTransitionTime:2022-05-17 14:13:20 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.4,},NodeAddress{Type:Hostname,Address:kind-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:97599c4ddadc4d3eae566d304d7d104d,SystemUUID:e786229d-3adc-40a9-a01d-f7a176ae9288,BootID:74a98684-362f-4056-ae8e-c98add53e9d2,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.19+64ad393c62a2c5,KubeProxyVersion:v1.22.10-rc.0.19+64ad393c62a2c5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:5963165f44b226e3bcc74555b40074fc3b47b362fc2fce35ac184854a283aa9a k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:63e87a9aad6a6666e3f6f4b9aaee92307a9390cfc1e41a029ceca9f6c4439478 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:123265849,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:59073d1c8ff506ad93a224d17df309f7bbbc344acb535ac430fef15e3866f973 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:7f13d8eb6f1182786e7080a1f94ea5d4cf6873534c2d47e79c444c8216f823aa 
k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:53932856,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 17 14:16:25.795: INFO: Logging kubelet events for node kind-control-plane May 17 14:16:25.874: INFO: Logging pods the kubelet thinks is on node kind-control-plane May 17 14:16:26.049: INFO: coredns-78fcd69978-b6zn2 started at 2022-05-17 14:13:20 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.049: INFO: Container coredns ready: true, restart count 0 May 17 14:16:26.050: INFO: local-path-provisioner-6c9449b9dd-bllk2 started at 2022-05-17 14:13:20 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.050: INFO: Container local-path-provisioner ready: true, restart count 0 May 17 14:16:26.050: INFO: coredns-78fcd69978-r97ls started at 2022-05-17 14:13:20 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.050: INFO: Container coredns ready: true, restart count 0 May 17 14:16:26.050: INFO: kube-proxy-5tzjs started at 2022-05-17 14:13:26 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.050: INFO: Container kube-proxy ready: true, restart count 0 May 17 14:16:26.050: INFO: etcd-kind-control-plane started at 2022-05-17 14:13:00 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.050: INFO: Container etcd ready: true, restart count 0 May 17 14:16:26.050: INFO: kube-apiserver-kind-control-plane started at 2022-05-17 14:13:01 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.050: INFO: Container kube-apiserver ready: true, restart count 0 May 17 
14:16:26.050: INFO: kube-controller-manager-kind-control-plane started at 2022-05-17 14:13:01 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.050: INFO: Container kube-controller-manager ready: false, restart count 0 May 17 14:16:26.050: INFO: kube-scheduler-kind-control-plane started at 2022-05-17 14:13:00 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.050: INFO: Container kube-scheduler ready: false, restart count 0 May 17 14:16:26.050: INFO: kindnet-s6wb7 started at 2022-05-17 14:13:07 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.050: INFO: Container kindnet-cni ready: true, restart count 0 May 17 14:16:26.537: INFO: Latency metrics for node kind-control-plane May 17 14:16:26.537: INFO: Logging node info for node kind-worker May 17 14:16:26.644: INFO: Node Info: &Node{ObjectMeta:{kind-worker 5ba69ac9-ccd5-447d-9f27-9bee10b23121 6999 0 2022-05-17 14:13:16 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker kubernetes.io/os:linux topology.hostpath.csi/node:kind-worker] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-9914":"kind-worker","csi-mock-csi-mock-volumes-7782":"csi-mock-csi-mock-volumes-7782"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubeadm Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-05-17 14:15:50 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2022-05-17 14:15:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:56 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 
+0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:56 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:56 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-17 14:15:56 +0000 UTC,LastTransitionTime:2022-05-17 14:13:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.2,},NodeAddress{Type:Hostname,Address:kind-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d7e8057f975041deb201a063c162c158,SystemUUID:fc207ccb-2e60-4380-b38c-671b606f274a,BootID:74a98684-362f-4056-ae8e-c98add53e9d2,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.19+64ad393c62a2c5,KubeProxyVersion:v1.22.10-rc.0.19+64ad393c62a2c5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:5963165f44b226e3bcc74555b40074fc3b47b362fc2fce35ac184854a283aa9a k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:63e87a9aad6a6666e3f6f4b9aaee92307a9390cfc1e41a029ceca9f6c4439478 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:123265849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 
k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:59073d1c8ff506ad93a224d17df309f7bbbc344acb535ac430fef15e3866f973 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:86742272,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:7f13d8eb6f1182786e7080a1f94ea5d4cf6873534c2d47e79c444c8216f823aa k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:53932856,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 
k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b 
k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-ephemeral-9914^da35d3ba-d5eb-11ec-918a-225d08066155],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-ephemeral-9914^da35d3ba-d5eb-11ec-918a-225d08066155,DevicePath:,},},Config:nil,},} May 17 14:16:26.645: INFO: Logging kubelet events for node kind-worker May 17 14:16:26.669: INFO: Logging pods the kubelet thinks is on node kind-worker May 17 14:16:26.867: INFO: deployment-55649fd747-6dbnz started at 2022-05-17 14:14:10 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.867: INFO: Container nginx ready: false, restart count 0 May 17 14:16:26.867: INFO: agnhost-primary-5db8ddd565-ktltd started at 2022-05-17 14:15:55 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.867: INFO: Container primary ready: true, restart count 0 May 17 14:16:26.867: INFO: send-events-4adc3414-703f-4b62-a42c-926d4b4b6fc2 started at <nil> (0+0 container statuses recorded) May 17 14:16:26.867: INFO: termination-message-container5de5961b-b8bb-4868-adb0-b23802f044ae started at <nil> (0+0 container statuses recorded) May 17 14:16:26.867: INFO: service-proxy-toggled-mlq7q started at 2022-05-17 14:15:47 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.867: INFO: Container service-proxy-toggled ready: true, restart count 0 May 17 14:16:26.867: INFO: hostexec-kind-worker-6t6pq started at 2022-05-17 14:15:52 +0000 UTC (0+1 container 
statuses recorded) May 17 14:16:26.867: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:26.867: INFO: agnhost-replica-6bcf79b489-885sh started at 2022-05-17 14:15:56 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.867: INFO: Container replica ready: true, restart count 0 May 17 14:16:26.867: INFO: hostexec-kind-worker-kbzwd started at <nil> (0+0 container statuses recorded) May 17 14:16:26.867: INFO: service-headless-tgqlh started at 2022-05-17 14:13:53 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.867: INFO: Container service-headless ready: true, restart count 0 May 17 14:16:26.867: INFO: deployment-55649fd747-f5bss started at 2022-05-17 14:14:10 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.867: INFO: Container nginx ready: false, restart count 0 May 17 14:16:26.867: INFO: pod-terminate-status-2-1 started at <nil> (0+0 container statuses recorded) May 17 14:16:26.867: INFO: service-proxy-disabled-qk4md started at 2022-05-17 14:15:32 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.867: INFO: Container service-proxy-disabled ready: true, restart count 0 May 17 14:16:26.867: INFO: busybox-readonly-fs83fb0cb9-60d4-419e-8e63-6beee25ccd06 started at 2022-05-17 14:15:24 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.867: INFO: Container busybox-readonly-fs83fb0cb9-60d4-419e-8e63-6beee25ccd06 ready: true, restart count 0 May 17 14:16:26.867: INFO: pod-b8564c48-8aa2-4bdb-a5eb-256f8afd47cc started at <nil> (0+0 container statuses recorded) May 17 14:16:26.867: INFO: concurrent-27546615-rw8md started at 2022-05-17 14:15:00 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.867: INFO: Container c ready: true, restart count 0 May 17 14:16:26.867: INFO: pod-qos-class-5c206c25-c993-4e12-9d89-06931c05bb47 started at <nil> (0+0 container statuses recorded) May 17 14:16:26.867: INFO: frontend-685fc574d5-64jqm started at 2022-05-17 14:15:55 +0000 UTC (0+1 container statuses 
recorded) May 17 14:16:26.867: INFO: Container guestbook-frontend ready: true, restart count 0 May 17 14:16:26.867: INFO: service-headless-q5g89 started at 2022-05-17 14:13:53 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.867: INFO: Container service-headless ready: true, restart count 0 May 17 14:16:26.867: INFO: service-headless-toggled-9zh79 started at 2022-05-17 14:14:08 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.867: INFO: Container service-headless-toggled ready: true, restart count 0 May 17 14:16:26.867: INFO: service-headless-toggled-nw4rn started at 2022-05-17 14:14:08 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.867: INFO: Container service-headless-toggled ready: true, restart count 0 May 17 14:16:26.867: INFO: inline-volume-tester-h6pnw started at <nil> (0+0 container statuses recorded) May 17 14:16:26.867: INFO: kube-proxy-dr9f5 started at 2022-05-17 14:13:23 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.867: INFO: Container kube-proxy ready: true, restart count 0 May 17 14:16:26.867: INFO: pod-terminate-status-0-1 started at <nil> (0+0 container statuses recorded) May 17 14:16:26.867: INFO: hostexec-kind-worker-gzkkg started at 2022-05-17 14:15:51 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.867: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:26.867: INFO: hostexec-kind-worker-28fdh started at 2022-05-17 14:15:56 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.867: INFO: Container agnhost-container ready: false, restart count 0 May 17 14:16:26.867: INFO: pod-client started at 2022-05-17 14:15:09 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.867: INFO: Container pod-client ready: true, restart count 0 May 17 14:16:26.867: INFO: liveness-7a123bfb-f826-4ce3-ad61-ebc03b5bc931 started at 2022-05-17 14:14:13 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.867: INFO: Container agnhost-container ready: true, restart 
count 0 May 17 14:16:26.867: INFO: concurrent-27546616-5t86w started at <nil> (0+0 container statuses recorded) May 17 14:16:26.867: INFO: kindnet-56p79 started at 2022-05-17 14:13:17 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.867: INFO: Container kindnet-cni ready: true, restart count 0 May 17 14:16:26.867: INFO: csi-mockplugin-0 started at <nil> (0+0 container statuses recorded) May 17 14:16:26.867: INFO: implicit-nonroot-uid started at <nil> (0+0 container statuses recorded) May 17 14:16:26.867: INFO: ss-1 started at <nil> (0+0 container statuses recorded) May 17 14:16:26.867: INFO: affinity-nodeport-xj24q started at 2022-05-17 14:15:27 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.867: INFO: Container affinity-nodeport ready: true, restart count 0 May 17 14:16:26.867: INFO: csi-hostpathplugin-0 started at 2022-05-17 14:15:44 +0000 UTC (0+7 container statuses recorded) May 17 14:16:26.867: INFO: Container csi-attacher ready: true, restart count 0 May 17 14:16:26.867: INFO: Container csi-provisioner ready: true, restart count 0 May 17 14:16:26.868: INFO: Container csi-resizer ready: true, restart count 0 May 17 14:16:26.868: INFO: Container csi-snapshotter ready: true, restart count 0 May 17 14:16:26.868: INFO: Container hostpath ready: true, restart count 0 May 17 14:16:26.868: INFO: Container liveness-probe ready: true, restart count 0 May 17 14:16:26.868: INFO: Container node-driver-registrar ready: true, restart count 0 May 17 14:16:28.257: INFO: Latency metrics for node kind-worker May 17 14:16:28.257: INFO: Logging node info for node kind-worker2 May 17 14:16:28.337: INFO: Node Info: &Node{ObjectMeta:{kind-worker2 d009e97f-d73b-4ed0-8afd-9f776bcb704d 6998 0 2022-05-17 14:13:16 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker2 kubernetes.io/os:linux topology.hostpath.csi/node:kind-worker2] 
map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-2591":"kind-worker2","csi-mock-csi-mock-volumes-8289":"csi-mock-csi-mock-volumes-8289"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubeadm Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubelet Update v1 2022-05-17 14:15:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 
DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:47 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:47 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:47 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-17 14:15:47 +0000 UTC,LastTransitionTime:2022-05-17 14:13:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.3,},NodeAddress{Type:Hostname,Address:kind-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:521433a178fd4494a7881394e8d644cf,SystemUUID:c86f292f-55cb-49c2-9dda-efb91a06dab9,BootID:74a98684-362f-4056-ae8e-c98add53e9d2,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 
21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.19+64ad393c62a2c5,KubeProxyVersion:v1.22.10-rc.0.19+64ad393c62a2c5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:5963165f44b226e3bcc74555b40074fc3b47b362fc2fce35ac184854a283aa9a k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:63e87a9aad6a6666e3f6f4b9aaee92307a9390cfc1e41a029ceca9f6c4439478 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:123265849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:59073d1c8ff506ad93a224d17df309f7bbbc344acb535ac430fef15e3866f973 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:7f13d8eb6f1182786e7080a1f94ea5d4cf6873534c2d47e79c444c8216f823aa k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:53932856,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 
k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 
k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 17 14:16:28.338: INFO: Logging kubelet events for node kind-worker2 May 17 14:16:28.400: INFO: Logging pods the kubelet thinks is on node kind-worker2 May 17 14:16:28.445: INFO: csi-hostpathplugin-0 started at 2022-05-17 14:13:56 +0000 UTC (0+7 container statuses recorded) May 17 14:16:28.445: INFO: Container csi-attacher ready: true, restart count 0 May 17 14:16:28.445: INFO: Container csi-provisioner ready: true, restart count 0 May 17 14:16:28.445: INFO: Container csi-resizer ready: true, restart count 0 May 17 14:16:28.445: INFO: Container csi-snapshotter ready: true, restart count 0 May 17 14:16:28.445: INFO: Container hostpath ready: true, restart count 0 May 17 14:16:28.445: INFO: Container liveness-probe ready: true, restart count 0 May 17 14:16:28.445: INFO: Container node-driver-registrar ready: true, restart count 0 May 17 14:16:28.445: INFO: ss-0 started at 2022-05-17 14:15:25 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.445: INFO: 
Container webserver ready: true, restart count 0 May 17 14:16:28.445: INFO: pod-23740531-141e-4062-9ddb-69c15ab1b060 started at 2022-05-17 14:15:53 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.445: INFO: Container write-pod ready: true, restart count 0 May 17 14:16:28.445: INFO: pvc-volume-tester-wh448 started at 2022-05-17 14:15:47 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.445: INFO: Container volume-tester ready: true, restart count 0 May 17 14:16:28.445: INFO: service-proxy-toggled-wrl7r started at 2022-05-17 14:15:47 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.445: INFO: Container service-proxy-toggled ready: true, restart count 0 May 17 14:16:28.445: INFO: ss2-0 started at 2022-05-17 14:16:01 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.445: INFO: Container webserver ready: false, restart count 0 May 17 14:16:28.445: INFO: rs-vkdrm started at 2022-05-17 14:14:49 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.445: INFO: Container donothing ready: false, restart count 0 May 17 14:16:28.445: INFO: service-proxy-disabled-brdhb started at 2022-05-17 14:15:32 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.445: INFO: Container service-proxy-disabled ready: true, restart count 0 May 17 14:16:28.445: INFO: pod-e8401689-4cef-4e20-9f2e-264844f9d704 started at 2022-05-17 14:15:53 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.445: INFO: Container write-pod ready: true, restart count 0 May 17 14:16:28.445: INFO: service-proxy-disabled-xzkc8 started at 2022-05-17 14:15:32 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.445: INFO: Container service-proxy-disabled ready: true, restart count 0 May 17 14:16:28.445: INFO: frontend-685fc574d5-lc6wl started at 2022-05-17 14:15:55 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.445: INFO: Container guestbook-frontend ready: true, restart count 0 May 17 14:16:28.445: INFO: frontend-685fc574d5-mj2wn started 
at 2022-05-17 14:15:55 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.445: INFO: Container guestbook-frontend ready: true, restart count 0 May 17 14:16:28.445: INFO: csi-mockplugin-attacher-0 started at 2022-05-17 14:13:58 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.445: INFO: Container csi-attacher ready: true, restart count 0 May 17 14:16:28.445: INFO: kube-proxy-sdmmb started at 2022-05-17 14:13:29 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.445: INFO: Container kube-proxy ready: true, restart count 0 May 17 14:16:28.445: INFO: agnhost-replica-6bcf79b489-xd4h9 started at 2022-05-17 14:15:56 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.445: INFO: Container replica ready: true, restart count 0 May 17 14:16:28.445: INFO: pod-9ec2f9c5-e367-4a4f-8edf-f159344127d8 started at 2022-05-17 14:16:03 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.445: INFO: Container write-pod ready: true, restart count 0 May 17 14:16:28.445: INFO: inline-volume-tester-tx6st started at 2022-05-17 14:13:56 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.445: INFO: Container csi-volume-tester ready: true, restart count 0 May 17 14:16:28.445: INFO: csi-mockplugin-0 started at 2022-05-17 14:13:58 +0000 UTC (0+3 container statuses recorded) May 17 14:16:28.445: INFO: Container csi-provisioner ready: true, restart count 0 May 17 14:16:28.445: INFO: Container driver-registrar ready: true, restart count 0 May 17 14:16:28.445: INFO: Container mock ready: true, restart count 0 May 17 14:16:28.445: INFO: service-headless-toggled-dvx6b started at 2022-05-17 14:14:08 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.445: INFO: Container service-headless-toggled ready: true, restart count 0 May 17 14:16:28.445: INFO: service-proxy-toggled-djh27 started at 2022-05-17 14:15:47 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.445: INFO: Container service-proxy-toggled ready: true, restart count 0 May 17 
14:16:28.445: INFO: kindnet-mq8qb started at 2022-05-17 14:13:17 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.445: INFO: Container kindnet-cni ready: true, restart count 0 May 17 14:16:28.445: INFO: service-headless-9mh45 started at 2022-05-17 14:13:53 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.445: INFO: Container service-headless ready: true, restart count 0 May 17 14:16:28.445: INFO: pod-a2ba308d-816a-406c-bf57-d5840e5d5387 started at 2022-05-17 14:15:58 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.445: INFO: Container write-pod ready: true, restart count 0 May 17 14:16:28.445: INFO: affinity-nodeport-f2r7w started at 2022-05-17 14:15:27 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.445: INFO: Container affinity-nodeport ready: true, restart count 0 May 17 14:16:28.445: INFO: hostexec-kind-worker2-kwzz9 started at 2022-05-17 14:15:35 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.445: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:28.445: INFO: hostexec-kind-worker2-gh6gn started at 2022-05-17 14:15:19 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.445: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:28.445: INFO: pod-2d828178-7e03-4331-91b0-2165aa4eeee5 started at <nil> (0+0 container statuses recorded) May 17 14:16:28.445: INFO: csi-mockplugin-0 started at 2022-05-17 14:15:19 +0000 UTC (0+4 container statuses recorded) May 17 14:16:28.445: INFO: Container busybox ready: true, restart count 0 May 17 14:16:28.445: INFO: Container csi-provisioner ready: true, restart count 0 May 17 14:16:28.445: INFO: Container driver-registrar ready: true, restart count 0 May 17 14:16:28.445: INFO: Container mock ready: true, restart count 0 May 17 14:16:28.445: INFO: hostexec-kind-worker2-k8cx7 started at 2022-05-17 14:15:45 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.445: INFO: Container agnhost-container ready: true, 
restart count 0 May 17 14:16:28.445: INFO: pod-server-2 started at 2022-05-17 14:15:45 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.445: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:30.659: INFO: Latency metrics for node kind-worker2 May 17 14:16:30.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5327" for this suite.
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sCSI\sVolumes\s\[Driver\:\scsi\-hostpath\]\s\[Testpattern\:\sGeneric\sEphemeral\-volume\s\(default\sfs\)\s\(late\-binding\)\]\sephemeral\sshould\ssupport\stwo\spods\swhich\sshare\sthe\ssame\svolume$'
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:183 May 17 14:16:22.835: waiting for pod with inline volume Unexpected error: <*errors.StatusError | 0xc000d88f00>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "etcdserver: request timed out", Reason: "", Details: nil, Code: 500, }, } etcdserver: request timed out occurred /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:321from junit_13.xml
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 17 14:15:41.007: INFO: >>> kubeConfig: /root/.kube/kind-test-config STEP: Building a namespace api object, basename ephemeral STEP: Waiting for a default service account to be provisioned in namespace [It] should support two pods which share the same volume /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:183 May 17 14:15:41.253: INFO: Pod inline-volume-smkb4 has the following logs: May 17 14:15:41.276: INFO: Deleting pod "inline-volume-smkb4" in namespace "ephemeral-9914" May 17 14:15:41.282: INFO: Wait up to 5m0s for pod "inline-volume-smkb4" to be fully deleted STEP: Building a driver namespace object, basename ephemeral-9914 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi-hostpath driver May 17 14:15:43.573: INFO: creating *v1.ServiceAccount: ephemeral-9914-7292/csi-attacher May 17 14:15:43.585: INFO: creating *v1.ClusterRole: external-attacher-runner-ephemeral-9914 May 17 14:15:43.585: INFO: Define cluster role external-attacher-runner-ephemeral-9914 May 17 14:15:43.592: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-ephemeral-9914 May 17 14:15:43.639: INFO: creating *v1.Role: ephemeral-9914-7292/external-attacher-cfg-ephemeral-9914 May 17 14:15:43.650: INFO: creating *v1.RoleBinding: ephemeral-9914-7292/csi-attacher-role-cfg May 17 14:15:43.659: INFO: creating *v1.ServiceAccount: ephemeral-9914-7292/csi-provisioner May 17 14:15:43.665: INFO: 
creating *v1.ClusterRole: external-provisioner-runner-ephemeral-9914 May 17 14:15:43.665: INFO: Define cluster role external-provisioner-runner-ephemeral-9914 May 17 14:15:43.678: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-ephemeral-9914 May 17 14:15:43.684: INFO: creating *v1.Role: ephemeral-9914-7292/external-provisioner-cfg-ephemeral-9914 May 17 14:15:43.691: INFO: creating *v1.RoleBinding: ephemeral-9914-7292/csi-provisioner-role-cfg May 17 14:15:43.705: INFO: creating *v1.ServiceAccount: ephemeral-9914-7292/csi-snapshotter May 17 14:15:43.713: INFO: creating *v1.ClusterRole: external-snapshotter-runner-ephemeral-9914 May 17 14:15:43.713: INFO: Define cluster role external-snapshotter-runner-ephemeral-9914 May 17 14:15:43.727: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-ephemeral-9914 May 17 14:15:43.735: INFO: creating *v1.Role: ephemeral-9914-7292/external-snapshotter-leaderelection-ephemeral-9914 May 17 14:15:43.793: INFO: creating *v1.RoleBinding: ephemeral-9914-7292/external-snapshotter-leaderelection May 17 14:15:43.803: INFO: creating *v1.ServiceAccount: ephemeral-9914-7292/csi-external-health-monitor-controller May 17 14:15:43.814: INFO: creating *v1.ClusterRole: external-health-monitor-controller-runner-ephemeral-9914 May 17 14:15:43.814: INFO: Define cluster role external-health-monitor-controller-runner-ephemeral-9914 May 17 14:15:43.827: INFO: creating *v1.ClusterRoleBinding: csi-external-health-monitor-controller-role-ephemeral-9914 May 17 14:15:43.841: INFO: creating *v1.Role: ephemeral-9914-7292/external-health-monitor-controller-cfg-ephemeral-9914 May 17 14:15:43.850: INFO: creating *v1.RoleBinding: ephemeral-9914-7292/csi-external-health-monitor-controller-role-cfg May 17 14:15:43.867: INFO: creating *v1.ServiceAccount: ephemeral-9914-7292/csi-resizer May 17 14:15:43.882: INFO: creating *v1.ClusterRole: external-resizer-runner-ephemeral-9914 May 17 14:15:43.882: INFO: Define cluster role 
external-resizer-runner-ephemeral-9914 May 17 14:15:43.922: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-ephemeral-9914 May 17 14:15:43.934: INFO: creating *v1.Role: ephemeral-9914-7292/external-resizer-cfg-ephemeral-9914 May 17 14:15:43.949: INFO: creating *v1.RoleBinding: ephemeral-9914-7292/csi-resizer-role-cfg May 17 14:15:43.964: INFO: creating *v1.CSIDriver: csi-hostpath-ephemeral-9914 May 17 14:15:43.976: INFO: creating *v1.ServiceAccount: ephemeral-9914-7292/csi-hostpathplugin-sa May 17 14:15:43.983: INFO: creating *v1.ClusterRoleBinding: csi-hostpathplugin-attacher-cluster-role-ephemeral-9914 May 17 14:15:43.990: INFO: creating *v1.ClusterRoleBinding: csi-hostpathplugin-health-monitor-controller-cluster-role-ephemeral-9914 May 17 14:15:43.998: INFO: creating *v1.ClusterRoleBinding: csi-hostpathplugin-provisioner-cluster-role-ephemeral-9914 May 17 14:15:44.004: INFO: creating *v1.ClusterRoleBinding: csi-hostpathplugin-resizer-cluster-role-ephemeral-9914 May 17 14:15:44.017: INFO: creating *v1.ClusterRoleBinding: csi-hostpathplugin-snapshotter-cluster-role-ephemeral-9914 May 17 14:15:44.048: INFO: creating *v1.RoleBinding: ephemeral-9914-7292/csi-hostpathplugin-attacher-role May 17 14:15:44.054: INFO: creating *v1.RoleBinding: ephemeral-9914-7292/csi-hostpathplugin-health-monitor-controller-role May 17 14:15:44.063: INFO: creating *v1.RoleBinding: ephemeral-9914-7292/csi-hostpathplugin-provisioner-role May 17 14:15:44.088: INFO: creating *v1.RoleBinding: ephemeral-9914-7292/csi-hostpathplugin-resizer-role May 17 14:15:44.109: INFO: creating *v1.RoleBinding: ephemeral-9914-7292/csi-hostpathplugin-snapshotter-role May 17 14:15:44.127: INFO: creating *v1.StatefulSet: ephemeral-9914-7292/csi-hostpathplugin May 17 14:15:44.199: INFO: creating *v1.ClusterRoleBinding: psp-csi-hostpath-role-ephemeral-9914 May 17 14:15:44.222: INFO: Creating resource for dynamic PV May 17 14:15:44.222: INFO: Using claimSize:1Mi, test suite supported size:{ }, 
driver(csi-hostpath) supported size:{ } STEP: creating a StorageClass ephemeral-9914md6rq STEP: checking the requested inline volume exists in the pod running on node {Name:kind-worker Selector:map[] Affinity:nil} May 17 14:16:22.835: FAIL: waiting for pod with inline volume Unexpected error: <*errors.StatusError | 0xc000d88f00>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "etcdserver: request timed out", Reason: "", Details: nil, Code: 500, }, } etcdserver: request timed out occurred Full Stack Trace k8s.io/kubernetes/test/e2e/storage/testsuites.EphemeralTest.TestEphemeral(0x79bc3e8, 0xc0027fba20, 0xc0014a31f0, 0xc00274aef0, 0xe, 0x0, 0x0, 0xc003615e00, 0xc002d16455, 0xb, ...) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:321 +0x848 k8s.io/kubernetes/test/e2e/storage/testsuites.(*ephemeralTestSuite).DefineTests.func5() /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:218 +0x1b8 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000a00a80) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc000a00a80) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b testing.tRunner(0xc000a00a80, 0x72e36d8) /usr/local/go/src/testing/testing.go:1203 +0xe5 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1248 +0x2b3 May 17 14:16:25.745: INFO: Pod inline-volume-tester-h6pnw has the following logs: /dev/md0 on /mnt/test-0 type ext4 (rw,relatime,discard,stripe=256) May 17 14:16:25.871: INFO: Deleting pod "inline-volume-tester-h6pnw" in namespace "ephemeral-9914" May 17 14:16:25.982: INFO: Wait up to 5m0s for pod "inline-volume-tester-h6pnw" to be fully deleted May 17 14:17:02.080: INFO: Wait up to 5m0s for pod PV 
pvc-63015700-11fe-481a-b89f-b7461c37f6ab to be fully deleted May 17 14:17:02.080: INFO: Waiting up to 5m0s for PersistentVolume pvc-63015700-11fe-481a-b89f-b7461c37f6ab to get deleted May 17 14:17:02.096: INFO: PersistentVolume pvc-63015700-11fe-481a-b89f-b7461c37f6ab was removed STEP: Deleting sc STEP: deleting the test namespace: ephemeral-9914 STEP: Waiting for namespaces [ephemeral-9914] to vanish STEP: uninstalling csi csi-hostpath driver May 17 14:17:18.276: INFO: deleting *v1.ServiceAccount: ephemeral-9914-7292/csi-attacher May 17 14:17:18.292: INFO: deleting *v1.ClusterRole: external-attacher-runner-ephemeral-9914 May 17 14:17:18.364: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-ephemeral-9914 May 17 14:17:18.396: INFO: deleting *v1.Role: ephemeral-9914-7292/external-attacher-cfg-ephemeral-9914 May 17 14:17:18.484: INFO: deleting *v1.RoleBinding: ephemeral-9914-7292/csi-attacher-role-cfg May 17 14:17:18.509: INFO: deleting *v1.ServiceAccount: ephemeral-9914-7292/csi-provisioner May 17 14:17:18.527: INFO: deleting *v1.ClusterRole: external-provisioner-runner-ephemeral-9914 May 17 14:17:18.551: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-ephemeral-9914 May 17 14:17:18.588: INFO: deleting *v1.Role: ephemeral-9914-7292/external-provisioner-cfg-ephemeral-9914 May 17 14:17:18.596: INFO: deleting *v1.RoleBinding: ephemeral-9914-7292/csi-provisioner-role-cfg May 17 14:17:18.605: INFO: deleting *v1.ServiceAccount: ephemeral-9914-7292/csi-snapshotter May 17 14:17:18.618: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-ephemeral-9914 May 17 14:17:18.635: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-ephemeral-9914 May 17 14:17:18.647: INFO: deleting *v1.Role: ephemeral-9914-7292/external-snapshotter-leaderelection-ephemeral-9914 May 17 14:17:18.670: INFO: deleting *v1.RoleBinding: ephemeral-9914-7292/external-snapshotter-leaderelection May 17 14:17:18.715: INFO: deleting 
*v1.ServiceAccount: ephemeral-9914-7292/csi-external-health-monitor-controller May 17 14:17:18.728: INFO: deleting *v1.ClusterRole: external-health-monitor-controller-runner-ephemeral-9914 May 17 14:17:18.743: INFO: deleting *v1.ClusterRoleBinding: csi-external-health-monitor-controller-role-ephemeral-9914 May 17 14:17:18.756: INFO: deleting *v1.Role: ephemeral-9914-7292/external-health-monitor-controller-cfg-ephemeral-9914 May 17 14:17:18.765: INFO: deleting *v1.RoleBinding: ephemeral-9914-7292/csi-external-health-monitor-controller-role-cfg May 17 14:17:18.784: INFO: deleting *v1.ServiceAccount: ephemeral-9914-7292/csi-resizer May 17 14:17:18.805: INFO: deleting *v1.ClusterRole: external-resizer-runner-ephemeral-9914 May 17 14:17:18.849: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-ephemeral-9914 May 17 14:17:18.875: INFO: deleting *v1.Role: ephemeral-9914-7292/external-resizer-cfg-ephemeral-9914 May 17 14:17:18.883: INFO: deleting *v1.RoleBinding: ephemeral-9914-7292/csi-resizer-role-cfg May 17 14:17:18.894: INFO: deleting *v1.CSIDriver: csi-hostpath-ephemeral-9914 May 17 14:17:18.923: INFO: deleting *v1.ServiceAccount: ephemeral-9914-7292/csi-hostpathplugin-sa May 17 14:17:18.936: INFO: deleting *v1.ClusterRoleBinding: csi-hostpathplugin-attacher-cluster-role-ephemeral-9914 May 17 14:17:18.970: INFO: deleting *v1.ClusterRoleBinding: csi-hostpathplugin-health-monitor-controller-cluster-role-ephemeral-9914 May 17 14:17:18.988: INFO: deleting *v1.ClusterRoleBinding: csi-hostpathplugin-provisioner-cluster-role-ephemeral-9914 May 17 14:17:19.009: INFO: deleting *v1.ClusterRoleBinding: csi-hostpathplugin-resizer-cluster-role-ephemeral-9914 May 17 14:17:19.037: INFO: deleting *v1.ClusterRoleBinding: csi-hostpathplugin-snapshotter-cluster-role-ephemeral-9914 May 17 14:17:19.114: INFO: deleting *v1.RoleBinding: ephemeral-9914-7292/csi-hostpathplugin-attacher-role May 17 14:17:19.139: INFO: deleting *v1.RoleBinding: 
ephemeral-9914-7292/csi-hostpathplugin-health-monitor-controller-role May 17 14:17:19.155: INFO: deleting *v1.RoleBinding: ephemeral-9914-7292/csi-hostpathplugin-provisioner-role May 17 14:17:19.201: INFO: deleting *v1.RoleBinding: ephemeral-9914-7292/csi-hostpathplugin-resizer-role May 17 14:17:19.269: INFO: deleting *v1.RoleBinding: ephemeral-9914-7292/csi-hostpathplugin-snapshotter-role May 17 14:17:19.317: INFO: deleting *v1.StatefulSet: ephemeral-9914-7292/csi-hostpathplugin May 17 14:17:19.387: INFO: deleting *v1.ClusterRoleBinding: psp-csi-hostpath-role-ephemeral-9914 STEP: deleting the driver namespace: ephemeral-9914-7292 STEP: Waiting for namespaces [ephemeral-9914-7292] to vanish [AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 17 14:17:51.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sCSI\smock\svolume\sCSI\sFSGroupPolicy\s\[LinuxOnly\]\sshould\smodify\sfsGroup\sif\sfsGroupPolicy\=default$'
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1583 May 17 14:16:22.825: Failed to register CSIDriver csi-mock-csi-mock-volumes-7782 Unexpected error: <*errors.errorString | 0xc003e46f10>: { s: "error waiting for CSI driver csi-mock-csi-mock-volumes-7782 registration on node kind-worker: etcdserver: request timed out", } error waiting for CSI driver csi-mock-csi-mock-volumes-7782 registration on node kind-worker: etcdserver: request timed out occurred /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:198from junit_25.xml
[BeforeEach] [sig-storage] CSI mock volume /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 17 14:16:00.616: INFO: >>> kubeConfig: /root/.kube/kind-test-config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should modify fsGroup if fsGroupPolicy=default /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1583 STEP: Building a driver namespace object, basename csi-mock-volumes-7782 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver May 17 14:16:01.001: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7782-7445/csi-attacher May 17 14:16:01.036: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-7782 May 17 14:16:01.036: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-7782 May 17 14:16:01.057: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-7782 May 17 14:16:01.064: INFO: creating *v1.Role: csi-mock-volumes-7782-7445/external-attacher-cfg-csi-mock-volumes-7782 May 17 14:16:01.070: INFO: creating *v1.RoleBinding: csi-mock-volumes-7782-7445/csi-attacher-role-cfg May 17 14:16:01.086: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7782-7445/csi-provisioner May 17 14:16:01.133: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-7782 May 17 14:16:01.133: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-7782 May 17 14:16:01.140: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-7782 May 17 14:16:01.162: INFO: creating *v1.Role: csi-mock-volumes-7782-7445/external-provisioner-cfg-csi-mock-volumes-7782 May 17 14:16:01.171: INFO: creating *v1.RoleBinding: 
csi-mock-volumes-7782-7445/csi-provisioner-role-cfg May 17 14:16:01.180: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7782-7445/csi-resizer May 17 14:16:01.193: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-7782 May 17 14:16:01.193: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-7782 May 17 14:16:01.200: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-7782 May 17 14:16:01.219: INFO: creating *v1.Role: csi-mock-volumes-7782-7445/external-resizer-cfg-csi-mock-volumes-7782 May 17 14:16:01.253: INFO: creating *v1.RoleBinding: csi-mock-volumes-7782-7445/csi-resizer-role-cfg May 17 14:16:01.264: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7782-7445/csi-snapshotter May 17 14:16:01.271: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-7782 May 17 14:16:01.271: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-7782 May 17 14:16:01.284: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-7782 May 17 14:16:01.293: INFO: creating *v1.Role: csi-mock-volumes-7782-7445/external-snapshotter-leaderelection-csi-mock-volumes-7782 May 17 14:16:01.305: INFO: creating *v1.RoleBinding: csi-mock-volumes-7782-7445/external-snapshotter-leaderelection May 17 14:16:01.331: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7782-7445/csi-mock May 17 14:16:01.344: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-7782 May 17 14:16:01.389: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-7782 May 17 14:16:01.397: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-7782 May 17 14:16:01.413: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-7782 May 17 14:16:01.419: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-7782 May 17 
14:16:01.440: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7782 May 17 14:16:01.446: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7782 May 17 14:16:01.469: INFO: creating *v1.StatefulSet: csi-mock-volumes-7782-7445/csi-mockplugin May 17 14:16:01.518: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-7782 May 17 14:16:01.548: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-7782" May 17 14:16:01.588: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-7782 to register on node kind-worker May 17 14:16:22.825: FAIL: Failed to register CSIDriver csi-mock-csi-mock-volumes-7782 Unexpected error: <*errors.errorString | 0xc003e46f10>: { s: "error waiting for CSI driver csi-mock-csi-mock-volumes-7782 registration on node kind-worker: etcdserver: request timed out", } error waiting for CSI driver csi-mock-csi-mock-volumes-7782 registration on node kind-worker: etcdserver: request timed out occurred Full Stack Trace k8s.io/kubernetes/test/e2e/storage.glob..func1.1(0x1, 0x0, 0x1, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) 
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:198 +0x61f k8s.io/kubernetes/test/e2e/storage.glob..func1.18.1() /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1587 +0x175 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000b83800) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc000b83800) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b testing.tRunner(0xc000b83800, 0x72e36d8) /usr/local/go/src/testing/testing.go:1203 +0xe5 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1248 +0x2b3 [AfterEach] [sig-storage] CSI mock volume /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "csi-mock-volumes-7782". STEP: Found 0 events. May 17 14:16:25.650: INFO: POD NODE PHASE GRACE CONDITIONS May 17 14:16:25.650: INFO: May 17 14:16:25.738: INFO: Logging node info for node kind-control-plane May 17 14:16:25.797: INFO: Node Info: &Node{ObjectMeta:{kind-control-plane 1fc1c875-88c2-4f79-b48f-f6ef32b124da 586 0 2022-05-17 14:12:51 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-05-17 14:12:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-05-17 14:12:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2022-05-17 14:13:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2022-05-17 14:13:20 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 
DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-17 14:13:20 +0000 UTC,LastTransitionTime:2022-05-17 14:12:47 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-17 14:13:20 +0000 UTC,LastTransitionTime:2022-05-17 14:12:47 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-17 14:13:20 +0000 UTC,LastTransitionTime:2022-05-17 14:12:47 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-17 14:13:20 +0000 UTC,LastTransitionTime:2022-05-17 14:13:20 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.4,},NodeAddress{Type:Hostname,Address:kind-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:97599c4ddadc4d3eae566d304d7d104d,SystemUUID:e786229d-3adc-40a9-a01d-f7a176ae9288,BootID:74a98684-362f-4056-ae8e-c98add53e9d2,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.19+64ad393c62a2c5,KubeProxyVersion:v1.22.10-rc.0.19+64ad393c62a2c5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:5963165f44b226e3bcc74555b40074fc3b47b362fc2fce35ac184854a283aa9a k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:63e87a9aad6a6666e3f6f4b9aaee92307a9390cfc1e41a029ceca9f6c4439478 
k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:123265849,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:59073d1c8ff506ad93a224d17df309f7bbbc344acb535ac430fef15e3866f973 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:7f13d8eb6f1182786e7080a1f94ea5d4cf6873534c2d47e79c444c8216f823aa k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:53932856,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 17 14:16:25.797: INFO: Logging kubelet events for node kind-control-plane May 17 14:16:25.872: INFO: Logging pods the kubelet thinks is on node kind-control-plane May 17 14:16:26.020: INFO: coredns-78fcd69978-b6zn2 started at 2022-05-17 14:13:20 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.020: INFO: Container coredns ready: true, restart count 0 May 17 14:16:26.020: INFO: local-path-provisioner-6c9449b9dd-bllk2 started at 2022-05-17 14:13:20 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.020: INFO: Container local-path-provisioner ready: true, restart count 0 May 17 14:16:26.020: INFO: coredns-78fcd69978-r97ls started at 2022-05-17 14:13:20 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.020: INFO: Container coredns ready: true, restart count 0 May 17 14:16:26.020: INFO: kube-proxy-5tzjs started at 2022-05-17 14:13:26 +0000 UTC (0+1 container statuses recorded) May 17 
14:16:26.020: INFO: Container kube-proxy ready: true, restart count 0 May 17 14:16:26.020: INFO: etcd-kind-control-plane started at 2022-05-17 14:13:00 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.020: INFO: Container etcd ready: true, restart count 0 May 17 14:16:26.020: INFO: kube-apiserver-kind-control-plane started at 2022-05-17 14:13:01 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.020: INFO: Container kube-apiserver ready: true, restart count 0 May 17 14:16:26.020: INFO: kube-controller-manager-kind-control-plane started at 2022-05-17 14:13:01 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.020: INFO: Container kube-controller-manager ready: false, restart count 0 May 17 14:16:26.020: INFO: kube-scheduler-kind-control-plane started at 2022-05-17 14:13:00 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.020: INFO: Container kube-scheduler ready: false, restart count 0 May 17 14:16:26.020: INFO: kindnet-s6wb7 started at 2022-05-17 14:13:07 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.020: INFO: Container kindnet-cni ready: true, restart count 0 May 17 14:16:26.460: INFO: Latency metrics for node kind-control-plane May 17 14:16:26.460: INFO: Logging node info for node kind-worker May 17 14:16:26.479: INFO: Node Info: &Node{ObjectMeta:{kind-worker 5ba69ac9-ccd5-447d-9f27-9bee10b23121 6999 0 2022-05-17 14:13:16 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker kubernetes.io/os:linux topology.hostpath.csi/node:kind-worker] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-9914":"kind-worker","csi-mock-csi-mock-volumes-7782":"csi-mock-csi-mock-volumes-7782"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-17 14:13:16 +0000 UTC 
FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubeadm Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-05-17 14:15:50 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2022-05-17 14:15:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} 
BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:56 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:56 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:56 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-17 14:15:56 +0000 UTC,LastTransitionTime:2022-05-17 14:13:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.2,},NodeAddress{Type:Hostname,Address:kind-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d7e8057f975041deb201a063c162c158,SystemUUID:fc207ccb-2e60-4380-b38c-671b606f274a,BootID:74a98684-362f-4056-ae8e-c98add53e9d2,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.19+64ad393c62a2c5,KubeProxyVersion:v1.22.10-rc.0.19+64ad393c62a2c5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:5963165f44b226e3bcc74555b40074fc3b47b362fc2fce35ac184854a283aa9a 
k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:63e87a9aad6a6666e3f6f4b9aaee92307a9390cfc1e41a029ceca9f6c4439478 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:123265849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:59073d1c8ff506ad93a224d17df309f7bbbc344acb535ac430fef15e3866f973 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:86742272,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:7f13d8eb6f1182786e7080a1f94ea5d4cf6873534c2d47e79c444c8216f823aa k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:53932856,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 
k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 
k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-ephemeral-9914^da35d3ba-d5eb-11ec-918a-225d08066155],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-ephemeral-9914^da35d3ba-d5eb-11ec-918a-225d08066155,DevicePath:,},},Config:nil,},} May 17 14:16:26.480: INFO: Logging kubelet events for node kind-worker May 17 14:16:26.532: INFO: Logging pods the kubelet thinks is on node kind-worker May 17 14:16:26.710: INFO: kube-proxy-dr9f5 started at 2022-05-17 14:13:23 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.710: INFO: Container kube-proxy ready: true, restart count 0 May 17 14:16:26.711: INFO: pod-terminate-status-0-1 started at <nil> (0+0 container statuses recorded) May 17 14:16:26.711: INFO: pod-client started at 2022-05-17 14:15:09 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.711: INFO: Container pod-client ready: true, restart count 0 May 17 14:16:26.711: INFO: liveness-7a123bfb-f826-4ce3-ad61-ebc03b5bc931 started at 2022-05-17 14:14:13 +0000 UTC (0+1 container statuses 
recorded) May 17 14:16:26.711: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:26.711: INFO: concurrent-27546616-5t86w started at <nil> (0+0 container statuses recorded) May 17 14:16:26.711: INFO: hostexec-kind-worker-gzkkg started at 2022-05-17 14:15:51 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.711: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:26.711: INFO: hostexec-kind-worker-28fdh started at 2022-05-17 14:15:56 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.711: INFO: Container agnhost-container ready: false, restart count 0 May 17 14:16:26.711: INFO: implicit-nonroot-uid started at <nil> (0+0 container statuses recorded) May 17 14:16:26.711: INFO: ss-1 started at <nil> (0+0 container statuses recorded) May 17 14:16:26.711: INFO: kindnet-56p79 started at 2022-05-17 14:13:17 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.711: INFO: Container kindnet-cni ready: true, restart count 0 May 17 14:16:26.711: INFO: csi-mockplugin-0 started at <nil> (0+0 container statuses recorded) May 17 14:16:26.711: INFO: affinity-nodeport-xj24q started at 2022-05-17 14:15:27 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.711: INFO: Container affinity-nodeport ready: true, restart count 0 May 17 14:16:26.711: INFO: csi-hostpathplugin-0 started at 2022-05-17 14:15:44 +0000 UTC (0+7 container statuses recorded) May 17 14:16:26.711: INFO: Container csi-attacher ready: true, restart count 0 May 17 14:16:26.711: INFO: Container csi-provisioner ready: true, restart count 0 May 17 14:16:26.711: INFO: Container csi-resizer ready: true, restart count 0 May 17 14:16:26.711: INFO: Container csi-snapshotter ready: true, restart count 0 May 17 14:16:26.711: INFO: Container hostpath ready: true, restart count 0 May 17 14:16:26.711: INFO: Container liveness-probe ready: true, restart count 0 May 17 14:16:26.711: INFO: Container node-driver-registrar ready: true, restart count 0 May 
17 14:16:26.711: INFO: deployment-55649fd747-6dbnz started at 2022-05-17 14:14:10 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.711: INFO: Container nginx ready: false, restart count 0 May 17 14:16:26.711: INFO: send-events-4adc3414-703f-4b62-a42c-926d4b4b6fc2 started at <nil> (0+0 container statuses recorded) May 17 14:16:26.711: INFO: termination-message-container5de5961b-b8bb-4868-adb0-b23802f044ae started at <nil> (0+0 container statuses recorded) May 17 14:16:26.711: INFO: service-proxy-toggled-mlq7q started at 2022-05-17 14:15:47 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.711: INFO: Container service-proxy-toggled ready: true, restart count 0 May 17 14:16:26.711: INFO: hostexec-kind-worker-6t6pq started at 2022-05-17 14:15:52 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.711: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:26.711: INFO: agnhost-primary-5db8ddd565-ktltd started at 2022-05-17 14:15:55 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.711: INFO: Container primary ready: true, restart count 0 May 17 14:16:26.711: INFO: agnhost-replica-6bcf79b489-885sh started at 2022-05-17 14:15:56 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.711: INFO: Container replica ready: true, restart count 0 May 17 14:16:26.711: INFO: pod-terminate-status-2-1 started at <nil> (0+0 container statuses recorded) May 17 14:16:26.711: INFO: service-proxy-disabled-qk4md started at 2022-05-17 14:15:32 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.712: INFO: Container service-proxy-disabled ready: true, restart count 0 May 17 14:16:26.712: INFO: busybox-readonly-fs83fb0cb9-60d4-419e-8e63-6beee25ccd06 started at 2022-05-17 14:15:24 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.712: INFO: Container busybox-readonly-fs83fb0cb9-60d4-419e-8e63-6beee25ccd06 ready: true, restart count 0 May 17 14:16:26.712: INFO: pod-b8564c48-8aa2-4bdb-a5eb-256f8afd47cc started 
at <nil> (0+0 container statuses recorded) May 17 14:16:26.712: INFO: concurrent-27546615-rw8md started at 2022-05-17 14:15:00 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.712: INFO: Container c ready: true, restart count 0 May 17 14:16:26.712: INFO: hostexec-kind-worker-kbzwd started at <nil> (0+0 container statuses recorded) May 17 14:16:26.712: INFO: service-headless-tgqlh started at 2022-05-17 14:13:53 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.712: INFO: Container service-headless ready: true, restart count 0 May 17 14:16:26.712: INFO: deployment-55649fd747-f5bss started at 2022-05-17 14:14:10 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.712: INFO: Container nginx ready: false, restart count 0 May 17 14:16:26.712: INFO: pod-qos-class-5c206c25-c993-4e12-9d89-06931c05bb47 started at <nil> (0+0 container statuses recorded) May 17 14:16:26.712: INFO: service-headless-toggled-9zh79 started at 2022-05-17 14:14:08 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.712: INFO: Container service-headless-toggled ready: true, restart count 0 May 17 14:16:26.712: INFO: service-headless-toggled-nw4rn started at 2022-05-17 14:14:08 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.712: INFO: Container service-headless-toggled ready: true, restart count 0 May 17 14:16:26.712: INFO: inline-volume-tester-h6pnw started at <nil> (0+0 container statuses recorded) May 17 14:16:26.712: INFO: frontend-685fc574d5-64jqm started at 2022-05-17 14:15:55 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.712: INFO: Container guestbook-frontend ready: true, restart count 0 May 17 14:16:26.712: INFO: service-headless-q5g89 started at 2022-05-17 14:13:53 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.712: INFO: Container service-headless ready: true, restart count 0 May 17 14:16:28.994: INFO: Latency metrics for node kind-worker May 17 14:16:28.994: INFO: Logging node info for node kind-worker2 May 17 
14:16:29.050: INFO: Node Info: &Node{ObjectMeta:{kind-worker2 d009e97f-d73b-4ed0-8afd-9f776bcb704d 6998 0 2022-05-17 14:13:16 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker2 kubernetes.io/os:linux topology.hostpath.csi/node:kind-worker2] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-2591":"kind-worker2","csi-mock-csi-mock-volumes-8289":"csi-mock-csi-mock-volumes-8289"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubeadm Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubelet Update v1 2022-05-17 14:15:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:47 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:47 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:47 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-17 14:15:47 +0000 UTC,LastTransitionTime:2022-05-17 14:13:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.3,},NodeAddress{Type:Hostname,Address:kind-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:521433a178fd4494a7881394e8d644cf,SystemUUID:c86f292f-55cb-49c2-9dda-efb91a06dab9,BootID:74a98684-362f-4056-ae8e-c98add53e9d2,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.19+64ad393c62a2c5,KubeProxyVersion:v1.22.10-rc.0.19+64ad393c62a2c5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:5963165f44b226e3bcc74555b40074fc3b47b362fc2fce35ac184854a283aa9a k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:63e87a9aad6a6666e3f6f4b9aaee92307a9390cfc1e41a029ceca9f6c4439478 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:123265849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:59073d1c8ff506ad93a224d17df309f7bbbc344acb535ac430fef15e3866f973 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:7f13d8eb6f1182786e7080a1f94ea5d4cf6873534c2d47e79c444c8216f823aa k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:53932856,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 
k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce 
k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 17 14:16:29.050: INFO: Logging kubelet events for node kind-worker2 May 17 14:16:29.193: INFO: Logging pods the kubelet thinks is on node kind-worker2 May 17 14:16:29.562: INFO: kube-proxy-sdmmb started at 2022-05-17 14:13:29 +0000 UTC (0+1 container statuses 
recorded) May 17 14:16:29.562: INFO: Container kube-proxy ready: true, restart count 0 May 17 14:16:29.562: INFO: agnhost-replica-6bcf79b489-xd4h9 started at 2022-05-17 14:15:56 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.562: INFO: Container replica ready: true, restart count 0 May 17 14:16:29.562: INFO: pod-9ec2f9c5-e367-4a4f-8edf-f159344127d8 started at 2022-05-17 14:16:03 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.562: INFO: Container write-pod ready: true, restart count 0 May 17 14:16:29.562: INFO: service-proxy-toggled-djh27 started at 2022-05-17 14:15:47 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.562: INFO: Container service-proxy-toggled ready: true, restart count 0 May 17 14:16:29.562: INFO: inline-volume-tester-tx6st started at 2022-05-17 14:13:56 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.563: INFO: Container csi-volume-tester ready: true, restart count 0 May 17 14:16:29.563: INFO: csi-mockplugin-0 started at 2022-05-17 14:13:58 +0000 UTC (0+3 container statuses recorded) May 17 14:16:29.563: INFO: Container csi-provisioner ready: true, restart count 0 May 17 14:16:29.563: INFO: Container driver-registrar ready: true, restart count 0 May 17 14:16:29.563: INFO: Container mock ready: true, restart count 0 May 17 14:16:29.563: INFO: service-headless-toggled-dvx6b started at 2022-05-17 14:14:08 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.563: INFO: Container service-headless-toggled ready: true, restart count 0 May 17 14:16:29.563: INFO: hostexec-kind-worker2-kwzz9 started at 2022-05-17 14:15:35 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.563: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:29.563: INFO: kindnet-mq8qb started at 2022-05-17 14:13:17 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.563: INFO: Container kindnet-cni ready: true, restart count 0 May 17 14:16:29.563: INFO: service-headless-9mh45 started at 
2022-05-17 14:13:53 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.563: INFO: Container service-headless ready: true, restart count 0 May 17 14:16:29.563: INFO: pod-a2ba308d-816a-406c-bf57-d5840e5d5387 started at 2022-05-17 14:15:58 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.563: INFO: Container write-pod ready: true, restart count 0 May 17 14:16:29.563: INFO: affinity-nodeport-f2r7w started at 2022-05-17 14:15:27 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.563: INFO: Container affinity-nodeport ready: true, restart count 0 May 17 14:16:29.563: INFO: hostexec-kind-worker2-gh6gn started at 2022-05-17 14:15:19 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.563: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:29.563: INFO: pod-2d828178-7e03-4331-91b0-2165aa4eeee5 started at <nil> (0+0 container statuses recorded) May 17 14:16:29.563: INFO: csi-mockplugin-0 started at 2022-05-17 14:15:19 +0000 UTC (0+4 container statuses recorded) May 17 14:16:29.563: INFO: Container busybox ready: true, restart count 0 May 17 14:16:29.563: INFO: Container csi-provisioner ready: true, restart count 0 May 17 14:16:29.563: INFO: Container driver-registrar ready: true, restart count 0 May 17 14:16:29.563: INFO: Container mock ready: true, restart count 0 May 17 14:16:29.563: INFO: hostexec-kind-worker2-k8cx7 started at 2022-05-17 14:15:45 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.563: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:29.563: INFO: pod-server-2 started at 2022-05-17 14:15:45 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.563: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:29.563: INFO: csi-hostpathplugin-0 started at 2022-05-17 14:13:56 +0000 UTC (0+7 container statuses recorded) May 17 14:16:29.563: INFO: Container csi-attacher ready: true, restart count 0 May 17 14:16:29.563: INFO: Container 
csi-provisioner ready: true, restart count 0 May 17 14:16:29.563: INFO: Container csi-resizer ready: true, restart count 0 May 17 14:16:29.563: INFO: Container csi-snapshotter ready: true, restart count 0 May 17 14:16:29.563: INFO: Container hostpath ready: true, restart count 0 May 17 14:16:29.563: INFO: Container liveness-probe ready: true, restart count 0 May 17 14:16:29.563: INFO: Container node-driver-registrar ready: true, restart count 0 May 17 14:16:29.563: INFO: ss-0 started at 2022-05-17 14:15:25 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.563: INFO: Container webserver ready: true, restart count 0 May 17 14:16:29.563: INFO: pod-23740531-141e-4062-9ddb-69c15ab1b060 started at 2022-05-17 14:15:53 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.563: INFO: Container write-pod ready: true, restart count 0 May 17 14:16:29.563: INFO: pvc-volume-tester-wh448 started at 2022-05-17 14:15:47 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.563: INFO: Container volume-tester ready: true, restart count 0 May 17 14:16:29.563: INFO: service-proxy-toggled-wrl7r started at 2022-05-17 14:15:47 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.563: INFO: Container service-proxy-toggled ready: true, restart count 0 May 17 14:16:29.563: INFO: ss2-0 started at 2022-05-17 14:16:01 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.563: INFO: Container webserver ready: false, restart count 0 May 17 14:16:29.563: INFO: rs-vkdrm started at 2022-05-17 14:14:49 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.563: INFO: Container donothing ready: false, restart count 0 May 17 14:16:29.563: INFO: service-proxy-disabled-brdhb started at 2022-05-17 14:15:32 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.563: INFO: Container service-proxy-disabled ready: true, restart count 0 May 17 14:16:29.563: INFO: service-proxy-disabled-xzkc8 started at 2022-05-17 14:15:32 +0000 UTC (0+1 container statuses recorded) 
May 17 14:16:29.563: INFO: Container service-proxy-disabled ready: true, restart count 0 May 17 14:16:29.563: INFO: frontend-685fc574d5-lc6wl started at 2022-05-17 14:15:55 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.563: INFO: Container guestbook-frontend ready: true, restart count 0 May 17 14:16:29.563: INFO: frontend-685fc574d5-mj2wn started at 2022-05-17 14:15:55 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.563: INFO: Container guestbook-frontend ready: true, restart count 0 May 17 14:16:29.563: INFO: csi-mockplugin-attacher-0 started at 2022-05-17 14:13:58 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.563: INFO: Container csi-attacher ready: false, restart count 0 May 17 14:16:30.986: INFO: Latency metrics for node kind-worker2 STEP: Collecting events from namespace "csi-mock-volumes-7782-7445". STEP: Found 11 events. May 17 14:16:31.021: INFO: At 2022-05-17 14:16:01 +0000 UTC - event for csi-mockplugin: {statefulset-controller } SuccessfulCreate: create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful May 17 14:16:31.021: INFO: At 2022-05-17 14:16:01 +0000 UTC - event for csi-mockplugin-0: {default-scheduler } Scheduled: Successfully assigned csi-mock-volumes-7782-7445/csi-mockplugin-0 to kind-worker May 17 14:16:31.021: INFO: At 2022-05-17 14:16:03 +0000 UTC - event for csi-mockplugin-0: {kubelet kind-worker} Pulled: Container image "k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0" already present on machine May 17 14:16:31.021: INFO: At 2022-05-17 14:16:03 +0000 UTC - event for csi-mockplugin-0: {kubelet kind-worker} Created: Created container csi-provisioner May 17 14:16:31.021: INFO: At 2022-05-17 14:16:04 +0000 UTC - event for csi-mockplugin-0: {kubelet kind-worker} Started: Started container csi-provisioner May 17 14:16:31.021: INFO: At 2022-05-17 14:16:04 +0000 UTC - event for csi-mockplugin-0: {kubelet kind-worker} Created: Created container driver-registrar May 17 14:16:31.021: 
INFO: At 2022-05-17 14:16:04 +0000 UTC - event for csi-mockplugin-0: {kubelet kind-worker} Started: Started container driver-registrar May 17 14:16:31.021: INFO: At 2022-05-17 14:16:04 +0000 UTC - event for csi-mockplugin-0: {kubelet kind-worker} Pulled: Container image "k8s.gcr.io/sig-storage/mock-driver:v4.1.0" already present on machine May 17 14:16:31.021: INFO: At 2022-05-17 14:16:04 +0000 UTC - event for csi-mockplugin-0: {kubelet kind-worker} Pulled: Container image "k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0" already present on machine May 17 14:16:31.021: INFO: At 2022-05-17 14:16:05 +0000 UTC - event for csi-mockplugin-0: {kubelet kind-worker} Created: Created container mock May 17 14:16:31.021: INFO: At 2022-05-17 14:16:05 +0000 UTC - event for csi-mockplugin-0: {kubelet kind-worker} Started: Started container mock May 17 14:16:31.099: INFO: POD NODE PHASE GRACE CONDITIONS May 17 14:16:31.099: INFO: csi-mockplugin-0 kind-worker Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-17 14:16:01 +0000 UTC }] May 17 14:16:31.100: INFO: May 17 14:16:31.118: INFO: Logging node info for node kind-control-plane May 17 14:16:31.186: INFO: Node Info: &Node{ObjectMeta:{kind-control-plane 1fc1c875-88c2-4f79-b48f-f6ef32b124da 586 0 2022-05-17 14:12:51 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-05-17 14:12:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-05-17 14:12:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2022-05-17 14:13:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2022-05-17 14:13:20 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 
DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-17 14:13:20 +0000 UTC,LastTransitionTime:2022-05-17 14:12:47 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-17 14:13:20 +0000 UTC,LastTransitionTime:2022-05-17 14:12:47 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-17 14:13:20 +0000 UTC,LastTransitionTime:2022-05-17 14:12:47 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-17 14:13:20 +0000 UTC,LastTransitionTime:2022-05-17 14:13:20 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.4,},NodeAddress{Type:Hostname,Address:kind-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:97599c4ddadc4d3eae566d304d7d104d,SystemUUID:e786229d-3adc-40a9-a01d-f7a176ae9288,BootID:74a98684-362f-4056-ae8e-c98add53e9d2,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.19+64ad393c62a2c5,KubeProxyVersion:v1.22.10-rc.0.19+64ad393c62a2c5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:5963165f44b226e3bcc74555b40074fc3b47b362fc2fce35ac184854a283aa9a k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:63e87a9aad6a6666e3f6f4b9aaee92307a9390cfc1e41a029ceca9f6c4439478 
k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:123265849,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:59073d1c8ff506ad93a224d17df309f7bbbc344acb535ac430fef15e3866f973 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:7f13d8eb6f1182786e7080a1f94ea5d4cf6873534c2d47e79c444c8216f823aa k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:53932856,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 17 14:16:31.186: INFO: Logging kubelet events for node kind-control-plane
May 17 14:16:31.225: INFO: Logging pods the kubelet thinks is on node kind-control-plane
May 17 14:16:31.239: INFO: etcd-kind-control-plane started at 2022-05-17 14:13:00 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:31.239: INFO: Container etcd ready: true, restart count 0
May 17 14:16:31.239: INFO: kube-apiserver-kind-control-plane started at 2022-05-17 14:13:01 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:31.239: INFO: Container kube-apiserver ready: true, restart count 0
May 17 14:16:31.239: INFO: kube-controller-manager-kind-control-plane started at 2022-05-17 14:13:01 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:31.239: INFO: Container kube-controller-manager ready: false, restart count 1
May 17 14:16:31.239: INFO: kube-scheduler-kind-control-plane started at 2022-05-17 14:13:00 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:31.239: INFO: Container kube-scheduler ready: false, restart count 1
May 17 14:16:31.239: INFO: kindnet-s6wb7 started at 2022-05-17 14:13:07 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:31.239: INFO: Container kindnet-cni ready: true, restart count 0
May 17 14:16:31.239: INFO: coredns-78fcd69978-b6zn2 started at 2022-05-17 14:13:20 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:31.239: INFO: Container coredns ready: true, restart count 0
May 17 14:16:31.239: INFO: local-path-provisioner-6c9449b9dd-bllk2 started at 2022-05-17 14:13:20 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:31.239: INFO: Container local-path-provisioner ready: true, restart count 0
May 17 14:16:31.239: INFO: coredns-78fcd69978-r97ls started at 2022-05-17 14:13:20 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:31.239: INFO: Container coredns ready: true, restart count 0
May 17 14:16:31.239: INFO: kube-proxy-5tzjs started at 2022-05-17 14:13:26 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:31.239: INFO: Container kube-proxy ready: true, restart count 0
May 17 14:16:31.301: INFO: Latency metrics for node kind-control-plane
May 17 14:16:31.301: INFO: Logging node info for node kind-worker
May 17 14:16:31.316: INFO: Node Info: &Node{ObjectMeta:{kind-worker 5ba69ac9-ccd5-447d-9f27-9bee10b23121 6999 0 2022-05-17 14:13:16 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker kubernetes.io/os:linux topology.hostpath.csi/node:kind-worker] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-9914":"kind-worker","csi-mock-csi-mock-volumes-7782":"csi-mock-csi-mock-volumes-7782"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-17 14:13:16 +0000 UTC
FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubeadm Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-05-17 14:15:50 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2022-05-17 14:15:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} 
BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:56 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:56 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:56 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-17 14:15:56 +0000 UTC,LastTransitionTime:2022-05-17 14:13:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.2,},NodeAddress{Type:Hostname,Address:kind-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d7e8057f975041deb201a063c162c158,SystemUUID:fc207ccb-2e60-4380-b38c-671b606f274a,BootID:74a98684-362f-4056-ae8e-c98add53e9d2,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.19+64ad393c62a2c5,KubeProxyVersion:v1.22.10-rc.0.19+64ad393c62a2c5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:5963165f44b226e3bcc74555b40074fc3b47b362fc2fce35ac184854a283aa9a 
k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:63e87a9aad6a6666e3f6f4b9aaee92307a9390cfc1e41a029ceca9f6c4439478 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:123265849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:59073d1c8ff506ad93a224d17df309f7bbbc344acb535ac430fef15e3866f973 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:86742272,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:7f13d8eb6f1182786e7080a1f94ea5d4cf6873534c2d47e79c444c8216f823aa k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:53932856,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 
k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 
k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-ephemeral-9914^da35d3ba-d5eb-11ec-918a-225d08066155],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-ephemeral-9914^da35d3ba-d5eb-11ec-918a-225d08066155,DevicePath:,},},Config:nil,},}
May 17 14:16:31.316: INFO: Logging kubelet events for node kind-worker
May 17 14:16:31.322: INFO: Logging pods the kubelet thinks is on node kind-worker
May 17 14:16:31.345: INFO: send-events-4adc3414-703f-4b62-a42c-926d4b4b6fc2 started at <nil> (0+0 container statuses recorded)
May 17 14:16:31.345: INFO: service-proxy-toggled-mlq7q started at 2022-05-17 14:15:47 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:31.345: INFO: Container service-proxy-toggled ready: true, restart count 0
May 17 14:16:31.345: INFO: hostexec-kind-worker-6t6pq started at 2022-05-17 14:15:52 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:31.345: INFO: Container agnhost-container ready: true, restart count 0
May 17 14:16:31.345: INFO: agnhost-primary-5db8ddd565-ktltd started at 2022-05-17 14:15:55 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:31.345: INFO: Container primary ready: true, restart count 0
May 17 14:16:31.345: INFO: agnhost-replica-6bcf79b489-885sh started at 2022-05-17 14:15:56 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:31.345: INFO: Container replica ready: true, restart count 0
May 17 14:16:31.345: INFO: service-proxy-disabled-qk4md started at 2022-05-17 14:15:32 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:31.345: INFO: Container service-proxy-disabled ready: true, restart count 0
May 17 14:16:31.345: INFO: pod-b8564c48-8aa2-4bdb-a5eb-256f8afd47cc started at <nil> (0+0 container statuses recorded)
May 17 14:16:31.345: INFO: concurrent-27546615-rw8md started at 2022-05-17 14:15:00 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:31.345: INFO: Container c ready: true, restart count 0
May 17 14:16:31.345: INFO: hostexec-kind-worker-kbzwd started at 2022-05-17 14:16:05 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:31.345: INFO: Container agnhost-container ready: true, restart count 0
May 17 14:16:31.345: INFO: service-headless-tgqlh started at 2022-05-17 14:13:53 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:31.345: INFO: Container service-headless ready: true, restart count 0
May 17 14:16:31.345: INFO: deployment-55649fd747-f5bss started at 2022-05-17 14:14:10 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:31.345: INFO: Container nginx ready: false, restart count 0
May 17 14:16:31.345: INFO: pod-qos-class-5c206c25-c993-4e12-9d89-06931c05bb47 started at 2022-05-17 14:16:07 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:31.345: INFO: Container agnhost ready: false, restart count 0
May 17 14:16:31.345: INFO: service-headless-toggled-9zh79 started at 2022-05-17 14:14:08 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:31.345: INFO: Container service-headless-toggled ready: true, restart count 0
May 17 14:16:31.345: INFO: service-headless-toggled-nw4rn started at 2022-05-17 14:14:08 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:31.345: INFO: Container service-headless-toggled ready: true, restart count 0
May 17 14:16:31.345: INFO: inline-volume-tester-h6pnw started at <nil> (0+0 container statuses recorded)
May 17 14:16:31.345: INFO: frontend-685fc574d5-64jqm started at 2022-05-17 14:15:55 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:31.345: INFO: Container guestbook-frontend ready: true, restart count 0
May 17 14:16:31.345: INFO: service-headless-q5g89 started at 2022-05-17 14:13:53 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:31.345: INFO: Container service-headless ready: true, restart count 0
May 17 14:16:31.345: INFO: kube-proxy-dr9f5 started at 2022-05-17 14:13:23 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:31.345: INFO: Container kube-proxy ready: true, restart count 0
May 17 14:16:31.345: INFO: pod-terminate-status-0-1 started at 2022-05-17 14:16:07 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:31.345: INFO: Container fail ready: false, restart count 0
May 17 14:16:31.345: INFO: pod-client started at 2022-05-17 14:15:09 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:31.345: INFO: Container pod-client ready: true, restart count 0
May 17 14:16:31.345: INFO: liveness-7a123bfb-f826-4ce3-ad61-ebc03b5bc931 started at 2022-05-17 14:14:13 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:31.345: INFO: Container agnhost-container ready: true, restart count 0
May 17 14:16:31.345: INFO: concurrent-27546616-5t86w started at 2022-05-17 14:16:00 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:31.345: INFO: Container c ready: false, restart count 0
May 17 14:16:31.345: INFO: hostexec-kind-worker-gzkkg started at 2022-05-17 14:15:51 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:31.345: INFO: Container agnhost-container ready: true, restart count 0
May 17 14:16:31.345: INFO: hostexec-kind-worker-28fdh started at 2022-05-17 14:15:56 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:31.345: INFO: Container agnhost-container ready: false, restart count 0
May 17 14:16:31.345: INFO: implicit-nonroot-uid started at 2022-05-17 14:16:05 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:31.345: INFO: Container implicit-nonroot-uid ready: false, restart count 0
May 17 14:16:31.345: INFO: ss-1 started at <nil> (0+0 container statuses recorded)
May 17 14:16:31.345: INFO: kindnet-56p79 started at 2022-05-17 14:13:17 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:31.345: INFO: Container kindnet-cni ready: true, restart count 0
May 17 14:16:31.345: INFO: csi-mockplugin-0 started at <nil> (0+0 container statuses recorded)
May 17 14:16:31.345: INFO: affinity-nodeport-xj24q started at 2022-05-17 14:15:27 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:31.345: INFO: Container affinity-nodeport ready: true, restart count 0
May 17 14:16:31.345: INFO: csi-hostpathplugin-0 started at 2022-05-17 14:15:44 +0000 UTC (0+7 container statuses recorded)
May 17 14:16:31.345: INFO: Container csi-attacher ready: true, restart count 0
May 17 14:16:31.345: INFO: Container csi-provisioner ready: true, restart count 0
May 17 14:16:31.345: INFO: Container csi-resizer ready: true, restart count 0
May 17 14:16:31.345: INFO: Container csi-snapshotter ready: true, restart count 0
May 17 14:16:31.345: INFO: Container hostpath ready: true, restart count 0
May 17 14:16:31.345: INFO: Container liveness-probe ready: true, restart count 0
May 17 14:16:31.345: INFO: Container node-driver-registrar ready: true, restart count 0
May 17 14:16:31.345: INFO: deployment-55649fd747-6dbnz started at 2022-05-17 14:14:10 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:31.345: INFO: Container nginx ready: false, restart count 0
May 17 14:16:31.606: INFO: Latency metrics for node kind-worker
May 17 14:16:31.606: INFO: Logging node info for node
kind-worker2 May 17 14:16:31.609: INFO: Node Info: &Node{ObjectMeta:{kind-worker2 d009e97f-d73b-4ed0-8afd-9f776bcb704d 6998 0 2022-05-17 14:13:16 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker2 kubernetes.io/os:linux topology.hostpath.csi/node:kind-worker2] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-2591":"kind-worker2","csi-mock-csi-mock-volumes-8289":"csi-mock-csi-mock-volumes-8289"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubeadm Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubelet Update v1 2022-05-17 14:15:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:47 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:47 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:47 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-17 14:15:47 +0000 UTC,LastTransitionTime:2022-05-17 14:13:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.3,},NodeAddress{Type:Hostname,Address:kind-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:521433a178fd4494a7881394e8d644cf,SystemUUID:c86f292f-55cb-49c2-9dda-efb91a06dab9,BootID:74a98684-362f-4056-ae8e-c98add53e9d2,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.19+64ad393c62a2c5,KubeProxyVersion:v1.22.10-rc.0.19+64ad393c62a2c5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:5963165f44b226e3bcc74555b40074fc3b47b362fc2fce35ac184854a283aa9a k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:63e87a9aad6a6666e3f6f4b9aaee92307a9390cfc1e41a029ceca9f6c4439478 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:123265849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:59073d1c8ff506ad93a224d17df309f7bbbc344acb535ac430fef15e3866f973 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:7f13d8eb6f1182786e7080a1f94ea5d4cf6873534c2d47e79c444c8216f823aa k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:53932856,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 
k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce 
k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 17 14:16:31.610: INFO: Logging kubelet events for node kind-worker2
May 17 14:16:31.614: INFO: Logging pods the kubelet thinks is on node kind-worker2
May 17 14:16:31.631: INFO: pod-9ec2f9c5-e367-4a4f-8edf-f159344127d8 started at 2022-05-17 14:16:03 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:31.631: INFO: Container write-pod ready: true, restart count 0
May 17 14:16:31.631: INFO: kube-proxy-sdmmb started at 2022-05-17 14:13:29 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:31.631: INFO: Container kube-proxy ready: true, restart count 0
May 17 14:16:31.631: INFO: agnhost-replica-6bcf79b489-xd4h9 started at 2022-05-17 14:15:56 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:31.631: INFO: Container replica ready: true, restart count 0
May 17 14:16:31.631: INFO: service-headless-toggled-dvx6b started at 2022-05-17 14:14:08 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:31.631: INFO: Container service-headless-toggled ready: true, restart count 0
May 17 14:16:31.631: INFO: service-proxy-toggled-djh27 started at 2022-05-17 14:15:47 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:31.631: INFO: Container service-proxy-toggled ready: true, restart count 0
May 17 14:16:31.631: INFO: inline-volume-tester-tx6st started at 2022-05-17 14:13:56 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:31.631: INFO: Container csi-volume-tester ready: true, restart count 0
May 17 14:16:31.631: INFO: hostexec-kind-worker2-kwzz9 started at 2022-05-17 14:15:35 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:31.631: INFO: Container agnhost-container ready: true, restart count 0
May 17 14:16:31.631: INFO: kindnet-mq8qb started at 2022-05-17 14:13:17 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:31.631: INFO: Container kindnet-cni ready: true, restart count 0
May 17 14:16:31.631: INFO: service-headless-9mh45 started at 2022-05-17 14:13:53 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:31.631: INFO: Container service-headless ready: true, restart count 0
May 17 14:16:31.631: INFO: pod-a2ba308d-816a-406c-bf57-d5840e5d5387 started at 2022-05-17 14:15:58 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:31.631: INFO: Container write-pod ready: true, restart count 0
May 17 14:16:31.631: INFO: hostexec-kind-worker2-k8cx7 started at 2022-05-17 14:15:45 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:31.631: INFO: Container agnhost-container ready: true, restart count 0
May 17 14:16:31.631: INFO: pod-server-2 started at 2022-05-17 14:15:45 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:31.631: INFO: Container agnhost-container ready: true, restart count 0
May 17 14:16:31.631: INFO: hostexec-kind-worker2-gh6gn started at 2022-05-17 14:15:19 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:31.631: INFO: Container agnhost-container ready: true, restart count 0
May 17 14:16:31.631: INFO: pod-2d828178-7e03-4331-91b0-2165aa4eeee5 started at <nil> (0+0 container statuses recorded)
May 17 14:16:31.631: INFO: csi-mockplugin-0 started at 2022-05-17 14:15:19 +0000 UTC (0+4 container statuses recorded)
May 17 14:16:31.631: INFO: Container busybox ready: true, restart count 0
May 17 14:16:31.631: INFO: Container csi-provisioner ready: true, restart count 0
May 17 14:16:31.631: INFO: Container driver-registrar ready: true, restart count 0
May 17 14:16:31.631: INFO: Container mock ready: true, restart count 0
May 17 14:16:31.632: INFO: csi-hostpathplugin-0 started at 2022-05-17 14:13:56 +0000 UTC (0+7 container statuses recorded)
May 17 14:16:31.632: INFO: Container csi-attacher ready: true, restart count 0
May 17 14:16:31.632: INFO: Container csi-provisioner ready: true, restart count 0
May 17 14:16:31.632: INFO: Container csi-resizer ready: true, restart count 0
May 17 14:16:31.632: INFO: Container csi-snapshotter ready: true, restart count 0
May 17 14:16:31.632: INFO: Container hostpath ready: true, restart count 0
May 17 14:16:31.632: INFO: Container liveness-probe ready: true, restart count 0
May 17 14:16:31.632: INFO: Container node-driver-registrar ready: true, restart count 0
May 17 14:16:31.632: INFO: ss-0 started at 2022-05-17 14:15:25 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:31.632:
INFO: Container webserver ready: true, restart count 0 May 17 14:16:31.632: INFO: pod-23740531-141e-4062-9ddb-69c15ab1b060 started at 2022-05-17 14:15:53 +0000 UTC (0+1 container statuses recorded) May 17 14:16:31.632: INFO: Container write-pod ready: true, restart count 0 May 17 14:16:31.632: INFO: ss2-0 started at 2022-05-17 14:16:01 +0000 UTC (0+1 container statuses recorded) May 17 14:16:31.632: INFO: Container webserver ready: false, restart count 0 May 17 14:16:31.632: INFO: rs-vkdrm started at 2022-05-17 14:14:49 +0000 UTC (0+1 container statuses recorded) May 17 14:16:31.632: INFO: Container donothing ready: false, restart count 0 May 17 14:16:31.632: INFO: pvc-volume-tester-wh448 started at 2022-05-17 14:15:47 +0000 UTC (0+1 container statuses recorded) May 17 14:16:31.632: INFO: Container volume-tester ready: true, restart count 0 May 17 14:16:31.632: INFO: service-proxy-toggled-wrl7r started at 2022-05-17 14:15:47 +0000 UTC (0+1 container statuses recorded) May 17 14:16:31.632: INFO: Container service-proxy-toggled ready: true, restart count 0 May 17 14:16:31.632: INFO: frontend-685fc574d5-lc6wl started at 2022-05-17 14:15:55 +0000 UTC (0+1 container statuses recorded) May 17 14:16:31.632: INFO: Container guestbook-frontend ready: true, restart count 0 May 17 14:16:31.632: INFO: frontend-685fc574d5-mj2wn started at 2022-05-17 14:15:55 +0000 UTC (0+1 container statuses recorded) May 17 14:16:31.632: INFO: Container guestbook-frontend ready: true, restart count 0 May 17 14:16:31.632: INFO: service-proxy-disabled-brdhb started at 2022-05-17 14:15:32 +0000 UTC (0+1 container statuses recorded) May 17 14:16:31.632: INFO: Container service-proxy-disabled ready: true, restart count 0 May 17 14:16:31.632: INFO: service-proxy-disabled-xzkc8 started at 2022-05-17 14:15:32 +0000 UTC (0+1 container statuses recorded) May 17 14:16:31.632: INFO: Container service-proxy-disabled ready: true, restart count 0 May 17 14:16:31.818: INFO: Latency metrics for node 
kind-worker2 May 17 14:16:31.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "csi-mock-volumes-7782" for this suite. STEP: Destroying namespace "csi-mock-volumes-7782-7445" for this suite.
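Each failure below includes a `go run hack/e2e.go` repro command whose `--ginkgo.focus` argument is the full test name with regex metacharacters backslash-escaped and spaces replaced by `\s` (a trailing `$` anchor appears in the commands as shown). A minimal shell sketch of that escaping; the `name`/`focus` variables and the sed rules are illustrative assumptions, not code from hack/e2e.go:

```shell
# Sketch: derive a --ginkgo.focus regex from a Ginkgo test name by
# escaping regex metacharacters, then replacing spaces with \s.
# Illustrative helper only, not part of hack/e2e.go.
name='Kubernetes e2e suite [sig-storage] CSI mock volume storage capacity exhausted, late binding, no topology'
focus=$(printf '%s' "$name" | sed -e 's/[][\.()*+?{}|^$,-]/\\&/g' -e 's/ /\\s/g')
printf '%s\n' "$focus"
```

Passing the result (with a `$` appended) to `--ginkgo.focus` reproduces the exact test selection shown in the commands below.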
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sCSI\smock\svolume\sstorage\scapacity\sexhausted\,\slate\sbinding\,\sno\stopology$'
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1081 May 17 14:16:22.840: failed to delete pod Unexpected error: <*errors.errorString | 0xc00354d550>: { s: "pod \"pvc-volume-tester-wh448\" was not deleted: etcdserver: request timed out", } pod "pvc-volume-tester-wh448" was not deleted: etcdserver: request timed out occurred /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1136from junit_08.xml
[BeforeEach] [sig-storage] CSI mock volume /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 17 14:15:18.537: INFO: >>> kubeConfig: /root/.kube/kind-test-config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] exhausted, late binding, no topology /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1081 STEP: Building a driver namespace object, basename csi-mock-volumes-8289 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock proxy May 17 14:15:18.852: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8289-5069/csi-attacher May 17 14:15:18.871: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-8289 May 17 14:15:18.871: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-8289 May 17 14:15:18.886: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-8289 May 17 14:15:18.899: INFO: creating *v1.Role: csi-mock-volumes-8289-5069/external-attacher-cfg-csi-mock-volumes-8289 May 17 14:15:18.924: INFO: creating *v1.RoleBinding: csi-mock-volumes-8289-5069/csi-attacher-role-cfg May 17 14:15:18.979: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8289-5069/csi-provisioner May 17 14:15:19.004: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-8289 May 17 14:15:19.004: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-8289 May 17 14:15:19.020: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-8289 May 17 14:15:19.029: INFO: creating *v1.Role: csi-mock-volumes-8289-5069/external-provisioner-cfg-csi-mock-volumes-8289 May 17 14:15:19.041: INFO: creating *v1.RoleBinding: 
csi-mock-volumes-8289-5069/csi-provisioner-role-cfg May 17 14:15:19.049: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8289-5069/csi-resizer May 17 14:15:19.064: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-8289 May 17 14:15:19.064: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-8289 May 17 14:15:19.097: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-8289 May 17 14:15:19.115: INFO: creating *v1.Role: csi-mock-volumes-8289-5069/external-resizer-cfg-csi-mock-volumes-8289 May 17 14:15:19.141: INFO: creating *v1.RoleBinding: csi-mock-volumes-8289-5069/csi-resizer-role-cfg May 17 14:15:19.153: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8289-5069/csi-snapshotter May 17 14:15:19.170: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-8289 May 17 14:15:19.170: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-8289 May 17 14:15:19.185: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-8289 May 17 14:15:19.233: INFO: creating *v1.Role: csi-mock-volumes-8289-5069/external-snapshotter-leaderelection-csi-mock-volumes-8289 May 17 14:15:19.244: INFO: creating *v1.RoleBinding: csi-mock-volumes-8289-5069/external-snapshotter-leaderelection May 17 14:15:19.265: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8289-5069/csi-mock May 17 14:15:19.277: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-8289 May 17 14:15:19.299: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-8289 May 17 14:15:19.317: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-8289 May 17 14:15:19.360: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-8289 May 17 14:15:19.373: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-8289 May 17 
14:15:19.385: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8289 May 17 14:15:19.403: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8289 May 17 14:15:19.422: INFO: creating *v1.StatefulSet: csi-mock-volumes-8289-5069/csi-mockplugin May 17 14:15:19.450: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-8289 May 17 14:15:19.509: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-8289" May 17 14:15:19.532: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-8289 to register on node kind-worker2 I0517 14:15:38.478800 86493 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-8289","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I0517 14:15:38.607117 86493 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null} I0517 14:15:38.610688 86493 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-8289","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I0517 14:15:38.614741 86493 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null} I0517 14:15:38.617375 86493 csi.go:432] gRPCCall: 
{"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null} I0517 14:15:39.065474 86493 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-8289"},"Error":"","FullError":null} STEP: Creating pod May 17 14:15:46.008: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil I0517 14:15:46.105125 86493 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-0eaf9ef8-b1fb-4501-a1bf-50dca72e0973","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}} I0517 14:15:46.140197 86493 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-0eaf9ef8-b1fb-4501-a1bf-50dca72e0973","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-0eaf9ef8-b1fb-4501-a1bf-50dca72e0973"}}},"Error":"","FullError":null} I0517 14:15:48.300040 86493 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0517 14:15:48.320355 86493 csi.go:432] gRPCCall: 
{"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} May 17 14:15:48.329: INFO: >>> kubeConfig: /root/.kube/kind-test-config I0517 14:15:48.463278 86493 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-0eaf9ef8-b1fb-4501-a1bf-50dca72e0973/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-0eaf9ef8-b1fb-4501-a1bf-50dca72e0973","storage.kubernetes.io/csiProvisionerIdentity":"1652796938618-8081-csi-mock-csi-mock-volumes-8289"}},"Response":{},"Error":"","FullError":null} I0517 14:15:49.380495 86493 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0517 14:15:49.384560 86493 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} May 17 14:15:49.387: INFO: >>> kubeConfig: /root/.kube/kind-test-config May 17 14:15:49.594: INFO: >>> kubeConfig: /root/.kube/kind-test-config May 17 14:15:49.728: INFO: >>> kubeConfig: /root/.kube/kind-test-config I0517 14:15:49.883650 86493 csi.go:432] gRPCCall: 
{"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-0eaf9ef8-b1fb-4501-a1bf-50dca72e0973/globalmount","target_path":"/var/lib/kubelet/pods/a6a91909-c261-407b-bab1-9ddf53d7af82/volumes/kubernetes.io~csi/pvc-0eaf9ef8-b1fb-4501-a1bf-50dca72e0973/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-0eaf9ef8-b1fb-4501-a1bf-50dca72e0973","storage.kubernetes.io/csiProvisionerIdentity":"1652796938618-8081-csi-mock-csi-mock-volumes-8289"}},"Response":{},"Error":"","FullError":null} May 17 14:16:02.074: INFO: Deleting pod "pvc-volume-tester-wh448" in namespace "csi-mock-volumes-8289" May 17 14:16:02.088: INFO: Wait up to 5m0s for pod "pvc-volume-tester-wh448" to be fully deleted May 17 14:16:02.555: INFO: >>> kubeConfig: /root/.kube/kind-test-config I0517 14:16:02.738993 86493 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/a6a91909-c261-407b-bab1-9ddf53d7af82/volumes/kubernetes.io~csi/pvc-0eaf9ef8-b1fb-4501-a1bf-50dca72e0973/mount"},"Response":{},"Error":"","FullError":null} I0517 14:16:02.843015 86493 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0517 14:16:02.848688 86493 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-0eaf9ef8-b1fb-4501-a1bf-50dca72e0973/globalmount"},"Response":{},"Error":"","FullError":null} May 17 14:16:22.840: FAIL: failed to delete pod Unexpected error: <*errors.errorString | 0xc00354d550>: { s: "pod \"pvc-volume-tester-wh448\" was not deleted: etcdserver: request timed out", } pod 
"pvc-volume-tester-wh448" was not deleted: etcdserver: request timed out occurred Full Stack Trace k8s.io/kubernetes/test/e2e/storage.glob..func1.14.1() /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1136 +0x785 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000bd4780) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc000bd4780) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b testing.tRunner(0xc000bd4780, 0x72e36d8) /usr/local/go/src/testing/testing.go:1203 +0xe5 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1248 +0x2b3 STEP: Deleting pod pvc-volume-tester-wh448 May 17 14:16:22.841: INFO: Deleting pod "pvc-volume-tester-wh448" in namespace "csi-mock-volumes-8289" May 17 14:16:25.521: INFO: Wait up to 5m0s for pod "pvc-volume-tester-wh448" to be fully deleted STEP: Deleting claim pvc-lj5sq May 17 14:16:33.603: INFO: Waiting up to 2m0s for PersistentVolume pvc-0eaf9ef8-b1fb-4501-a1bf-50dca72e0973 to get deleted May 17 14:16:33.627: INFO: PersistentVolume pvc-0eaf9ef8-b1fb-4501-a1bf-50dca72e0973 found and phase=Bound (24.487125ms) May 17 14:16:35.632: INFO: PersistentVolume pvc-0eaf9ef8-b1fb-4501-a1bf-50dca72e0973 found and phase=Bound (2.029136271s) May 17 14:16:37.637: INFO: PersistentVolume pvc-0eaf9ef8-b1fb-4501-a1bf-50dca72e0973 found and phase=Bound (4.033799428s) May 17 14:16:39.642: INFO: PersistentVolume pvc-0eaf9ef8-b1fb-4501-a1bf-50dca72e0973 found and phase=Bound (6.038612544s) May 17 14:16:41.646: INFO: PersistentVolume pvc-0eaf9ef8-b1fb-4501-a1bf-50dca72e0973 found and phase=Bound (8.043188926s) May 17 14:16:43.650: INFO: PersistentVolume pvc-0eaf9ef8-b1fb-4501-a1bf-50dca72e0973 found and phase=Bound (10.04710384s) May 17 14:16:45.654: INFO: PersistentVolume pvc-0eaf9ef8-b1fb-4501-a1bf-50dca72e0973 found and phase=Bound (12.051330628s) May 17 14:16:47.658: INFO: PersistentVolume 
pvc-0eaf9ef8-b1fb-4501-a1bf-50dca72e0973 found and phase=Bound (14.055381954s) May 17 14:16:49.663: INFO: PersistentVolume pvc-0eaf9ef8-b1fb-4501-a1bf-50dca72e0973 found and phase=Bound (16.060080192s) May 17 14:16:51.668: INFO: PersistentVolume pvc-0eaf9ef8-b1fb-4501-a1bf-50dca72e0973 found and phase=Bound (18.064728777s) May 17 14:16:53.671: INFO: PersistentVolume pvc-0eaf9ef8-b1fb-4501-a1bf-50dca72e0973 found and phase=Bound (20.06822182s) May 17 14:16:55.675: INFO: PersistentVolume pvc-0eaf9ef8-b1fb-4501-a1bf-50dca72e0973 found and phase=Bound (22.071520878s) May 17 14:16:57.679: INFO: PersistentVolume pvc-0eaf9ef8-b1fb-4501-a1bf-50dca72e0973 found and phase=Bound (24.075912405s) I0517 14:16:59.157492 86493 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null} May 17 14:16:59.751: INFO: PersistentVolume pvc-0eaf9ef8-b1fb-4501-a1bf-50dca72e0973 was removed STEP: Deleting storageclass csi-mock-volumes-8289-scpgphs STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-8289 STEP: Waiting for namespaces [csi-mock-volumes-8289] to vanish STEP: uninstalling csi mock driver May 17 14:17:21.031: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8289-5069/csi-attacher May 17 14:17:21.088: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-8289 May 17 14:17:21.158: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-8289 May 17 14:17:21.262: INFO: deleting *v1.Role: csi-mock-volumes-8289-5069/external-attacher-cfg-csi-mock-volumes-8289 May 17 14:17:21.303: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8289-5069/csi-attacher-role-cfg May 17 14:17:21.363: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8289-5069/csi-provisioner May 17 14:17:21.397: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-8289 May 17 14:17:21.428: INFO: deleting 
*v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-8289 May 17 14:17:21.445: INFO: deleting *v1.Role: csi-mock-volumes-8289-5069/external-provisioner-cfg-csi-mock-volumes-8289 May 17 14:17:21.496: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8289-5069/csi-provisioner-role-cfg May 17 14:17:21.514: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8289-5069/csi-resizer May 17 14:17:21.535: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-8289 May 17 14:17:21.550: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-8289 May 17 14:17:21.580: INFO: deleting *v1.Role: csi-mock-volumes-8289-5069/external-resizer-cfg-csi-mock-volumes-8289 May 17 14:17:21.648: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8289-5069/csi-resizer-role-cfg May 17 14:17:21.670: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8289-5069/csi-snapshotter May 17 14:17:21.719: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-8289 May 17 14:17:21.796: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-8289 May 17 14:17:21.832: INFO: deleting *v1.Role: csi-mock-volumes-8289-5069/external-snapshotter-leaderelection-csi-mock-volumes-8289 May 17 14:17:21.923: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8289-5069/external-snapshotter-leaderelection May 17 14:17:21.949: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8289-5069/csi-mock May 17 14:17:21.971: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-8289 May 17 14:17:21.984: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-8289 May 17 14:17:22.019: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-8289 May 17 14:17:22.055: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-8289 May 17 14:17:22.075: INFO: deleting *v1.ClusterRoleBinding: 
csi-controller-resizer-role-csi-mock-volumes-8289 May 17 14:17:22.091: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8289 May 17 14:17:22.114: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8289 May 17 14:17:22.135: INFO: deleting *v1.StatefulSet: csi-mock-volumes-8289-5069/csi-mockplugin May 17 14:17:22.165: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-8289 STEP: deleting the driver namespace: csi-mock-volumes-8289-5069 STEP: Waiting for namespaces [csi-mock-volumes-8289-5069] to vanish [AfterEach] [sig-storage] CSI mock volume /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 17 14:18:10.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\slocal\]\[LocalVolumeType\:\sdir\-bindmounted\]\s\[Testpattern\:\sPre\-provisioned\sPV\s\(default\sfs\)\]\ssubPath\sshould\sbe\sable\sto\sunmount\safter\sthe\ssubpath\sdirectory\sis\sdeleted\s\[LinuxOnly\]$'
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:445 May 17 14:16:22.843: Unexpected error: <*errors.StatusError | 0xc001922f00>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "etcdserver: request timed out", Reason: "", Details: nil, Code: 500, }, } etcdserver: request timed out occurred /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/host_exec.go:110from junit_10.xml
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 17 14:15:50.864: INFO: >>> kubeConfig: /root/.kube/kind-test-config STEP: Building a namespace api object, basename provisioning STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to unmount after the subpath directory is deleted [LinuxOnly] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:445 May 17 14:15:51.185: INFO: In-tree plugin kubernetes.io/local-volume is not migrated, not validating any metrics May 17 14:16:22.843: FAIL: Unexpected error: <*errors.StatusError | 0xc001922f00>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "etcdserver: request timed out", Reason: "", Details: nil, Code: 500, }, } etcdserver: request timed out occurred Full Stack Trace k8s.io/kubernetes/test/e2e/storage/utils.(*hostExecutor).launchNodeExecPod(0xc00060c410, 0xc002632555, 0xb, 0xb) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/host_exec.go:110 +0x4b9 k8s.io/kubernetes/test/e2e/storage/utils.(*hostExecutor).exec(0xc00060c410, 0xc003df7bc0, 0xba, 0xc002666000, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) 
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/host_exec.go:136 +0x3b5 k8s.io/kubernetes/test/e2e/storage/utils.(*hostExecutor).IssueCommandWithResult(0xc00060c410, 0xc003df7bc0, 0xba, 0xc002666000, 0x3, 0xba, 0xc003df7bc0, 0xc00366a000) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/host_exec.go:169 +0x99 k8s.io/kubernetes/test/e2e/storage/utils.(*hostExecutor).IssueCommand(0xc00060c410, 0xc003df7bc0, 0xba, 0xc002666000, 0x3, 0xc003df7bc0) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/host_exec.go:178 +0x49 k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).setupLocalVolumeDirectoryBindMounted(0xc0025b1980, 0xc002666000, 0x0, 0x203000) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:237 +0x14b k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).Create(0xc0025b1980, 0xc002666000, 0x7094dfe, 0xf, 0x0, 0x68519e0) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:300 +0x47b k8s.io/kubernetes/test/e2e/storage/drivers.(*localDriver).CreateVolume(0xc0025b8400, 0xc00247f8c0, 0x7098411, 0x10, 0xc0025b8400, 0x6d57d01) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1941 +0x144 k8s.io/kubernetes/test/e2e/storage/framework.CreateVolume(0x7911668, 0xc0025b8400, 0xc00247f8c0, 0x7098411, 0x10, 0xc00435c600, 0x2199bc5) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/driver_operations.go:43 +0x222 k8s.io/kubernetes/test/e2e/storage/framework.CreateVolumeResource(0x7911668, 0xc0025b8400, 0xc00247f8c0, 0x70f7855, 0x1f, 0x0, 0x0, 0x7098411, 0x10, 0x0, ...) 
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/volume_resource.go:65 +0x1e5 k8s.io/kubernetes/test/e2e/storage/testsuites.(*subPathTestSuite).DefineTests.func1() /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:127 +0x2c5 k8s.io/kubernetes/test/e2e/storage/testsuites.(*subPathTestSuite).DefineTests.func20() /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:446 +0x7d k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000325980) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc000325980) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b testing.tRunner(0xc000325980, 0x72e36d8) /usr/local/go/src/testing/testing.go:1203 +0xe5 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1248 +0x2b3 [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "provisioning-803". STEP: Found 4 events. 
May 17 14:16:25.592: INFO: At 2022-05-17 14:15:51 +0000 UTC - event for hostexec-kind-worker-gzkkg: {default-scheduler } Scheduled: Successfully assigned provisioning-803/hostexec-kind-worker-gzkkg to kind-worker May 17 14:16:25.592: INFO: At 2022-05-17 14:15:52 +0000 UTC - event for hostexec-kind-worker-gzkkg: {kubelet kind-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine May 17 14:16:25.592: INFO: At 2022-05-17 14:15:52 +0000 UTC - event for hostexec-kind-worker-gzkkg: {kubelet kind-worker} Created: Created container agnhost-container May 17 14:16:25.592: INFO: At 2022-05-17 14:15:52 +0000 UTC - event for hostexec-kind-worker-gzkkg: {kubelet kind-worker} Started: Started container agnhost-container May 17 14:16:25.659: INFO: POD NODE PHASE GRACE CONDITIONS May 17 14:16:25.659: INFO: hostexec-kind-worker-gzkkg kind-worker Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-17 14:15:51 +0000 UTC }] May 17 14:16:25.659: INFO: May 17 14:16:25.748: INFO: Logging node info for node kind-control-plane May 17 14:16:25.812: INFO: Node Info: &Node{ObjectMeta:{kind-control-plane 1fc1c875-88c2-4f79-b48f-f6ef32b124da 586 0 2022-05-17 14:12:51 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-05-17 14:12:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-05-17 14:12:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2022-05-17 14:13:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2022-05-17 14:13:20 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 
DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-17 14:13:20 +0000 UTC,LastTransitionTime:2022-05-17 14:12:47 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-17 14:13:20 +0000 UTC,LastTransitionTime:2022-05-17 14:12:47 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-17 14:13:20 +0000 UTC,LastTransitionTime:2022-05-17 14:12:47 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-17 14:13:20 +0000 UTC,LastTransitionTime:2022-05-17 14:13:20 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.4,},NodeAddress{Type:Hostname,Address:kind-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:97599c4ddadc4d3eae566d304d7d104d,SystemUUID:e786229d-3adc-40a9-a01d-f7a176ae9288,BootID:74a98684-362f-4056-ae8e-c98add53e9d2,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.19+64ad393c62a2c5,KubeProxyVersion:v1.22.10-rc.0.19+64ad393c62a2c5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:5963165f44b226e3bcc74555b40074fc3b47b362fc2fce35ac184854a283aa9a k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:63e87a9aad6a6666e3f6f4b9aaee92307a9390cfc1e41a029ceca9f6c4439478 
k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:123265849,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:59073d1c8ff506ad93a224d17df309f7bbbc344acb535ac430fef15e3866f973 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:7f13d8eb6f1182786e7080a1f94ea5d4cf6873534c2d47e79c444c8216f823aa k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:53932856,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 17 14:16:25.813: INFO: Logging kubelet events for node kind-control-plane
May 17 14:16:25.879: INFO: Logging pods the kubelet thinks is on node kind-control-plane
May 17 14:16:26.022: INFO: etcd-kind-control-plane started at 2022-05-17 14:13:00 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:26.022: INFO: Container etcd ready: true, restart count 0
May 17 14:16:26.022: INFO: kube-apiserver-kind-control-plane started at 2022-05-17 14:13:01 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:26.022: INFO: Container kube-apiserver ready: true, restart count 0
May 17 14:16:26.022: INFO: kube-controller-manager-kind-control-plane started at 2022-05-17 14:13:01 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:26.022: INFO: Container kube-controller-manager ready: false, restart count 0
May 17 14:16:26.022: INFO: kube-scheduler-kind-control-plane started at 2022-05-17 14:13:00 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:26.022: INFO: Container kube-scheduler ready: false, restart count 0
May 17 14:16:26.022: INFO: kindnet-s6wb7 started at 2022-05-17 14:13:07 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:26.022: INFO: Container kindnet-cni ready: true, restart count 0
May 17 14:16:26.022: INFO: coredns-78fcd69978-b6zn2 started at 2022-05-17 14:13:20 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:26.022: INFO: Container coredns ready: true, restart count 0
May 17 14:16:26.022: INFO: local-path-provisioner-6c9449b9dd-bllk2 started at 2022-05-17 14:13:20 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:26.022: INFO: Container local-path-provisioner ready: true, restart count 0
May 17 14:16:26.022: INFO: coredns-78fcd69978-r97ls started at 2022-05-17 14:13:20 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:26.022: INFO: Container coredns ready: true, restart count 0
May 17 14:16:26.022: INFO: kube-proxy-5tzjs started at 2022-05-17 14:13:26 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:26.022: INFO: Container kube-proxy ready: true, restart count 0
May 17 14:16:26.465: INFO: Latency metrics for node kind-control-plane
May 17 14:16:26.465: INFO: Logging node info for node kind-worker
May 17 14:16:26.546: INFO: Node Info: &Node{ObjectMeta:{kind-worker 5ba69ac9-ccd5-447d-9f27-9bee10b23121 6999 0 2022-05-17 14:13:16 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker kubernetes.io/os:linux topology.hostpath.csi/node:kind-worker] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-9914":"kind-worker","csi-mock-csi-mock-volumes-7782":"csi-mock-csi-mock-volumes-7782"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-17 14:13:16 +0000 UTC 
FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubeadm Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-05-17 14:15:50 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2022-05-17 14:15:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} 
BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:56 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:56 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:56 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-17 14:15:56 +0000 UTC,LastTransitionTime:2022-05-17 14:13:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.2,},NodeAddress{Type:Hostname,Address:kind-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d7e8057f975041deb201a063c162c158,SystemUUID:fc207ccb-2e60-4380-b38c-671b606f274a,BootID:74a98684-362f-4056-ae8e-c98add53e9d2,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.19+64ad393c62a2c5,KubeProxyVersion:v1.22.10-rc.0.19+64ad393c62a2c5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:5963165f44b226e3bcc74555b40074fc3b47b362fc2fce35ac184854a283aa9a 
k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:63e87a9aad6a6666e3f6f4b9aaee92307a9390cfc1e41a029ceca9f6c4439478 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:123265849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:59073d1c8ff506ad93a224d17df309f7bbbc344acb535ac430fef15e3866f973 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:86742272,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:7f13d8eb6f1182786e7080a1f94ea5d4cf6873534c2d47e79c444c8216f823aa k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:53932856,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 
k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 
k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-ephemeral-9914^da35d3ba-d5eb-11ec-918a-225d08066155],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-ephemeral-9914^da35d3ba-d5eb-11ec-918a-225d08066155,DevicePath:,},},Config:nil,},}
May 17 14:16:26.546: INFO: Logging kubelet events for node kind-worker
May 17 14:16:26.647: INFO: Logging pods the kubelet thinks is on node kind-worker
May 17 14:16:26.788: INFO: hostexec-kind-worker-kbzwd started at <nil> (0+0 container statuses recorded)
May 17 14:16:26.788: INFO: service-headless-tgqlh started at 2022-05-17 14:13:53 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:26.788: INFO: Container service-headless ready: true, restart count 0
May 17 14:16:26.788: INFO: deployment-55649fd747-f5bss started at 2022-05-17 14:14:10 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:26.788: INFO: Container nginx ready: false, restart count 0
May 17 14:16:26.788: INFO: pod-terminate-status-2-1 started at <nil> (0+0 container statuses recorded)
May 17 14:16:26.788: INFO: service-proxy-disabled-qk4md started at 2022-05-17 14:15:32 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:26.788: INFO: Container service-proxy-disabled ready: true, restart count 0
May 17 14:16:26.788: INFO: busybox-readonly-fs83fb0cb9-60d4-419e-8e63-6beee25ccd06 started at 2022-05-17 14:15:24 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:26.788: INFO: Container busybox-readonly-fs83fb0cb9-60d4-419e-8e63-6beee25ccd06 ready: true, restart count 0
May 17 14:16:26.788: INFO: pod-b8564c48-8aa2-4bdb-a5eb-256f8afd47cc started at <nil> (0+0 container statuses recorded)
May 17 14:16:26.788: INFO: concurrent-27546615-rw8md started at 2022-05-17 14:15:00 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:26.788: INFO: Container c ready: true, restart count 0
May 17 14:16:26.788: INFO: pod-qos-class-5c206c25-c993-4e12-9d89-06931c05bb47 started at <nil> (0+0 container statuses recorded)
May 17 14:16:26.788: INFO: frontend-685fc574d5-64jqm started at 2022-05-17 14:15:55 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:26.788: INFO: Container guestbook-frontend ready: true, restart count 0
May 17 14:16:26.788: INFO: service-headless-q5g89 started at 2022-05-17 14:13:53 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:26.788: INFO: Container service-headless ready: true, restart count 0
May 17 14:16:26.788: INFO: service-headless-toggled-9zh79 started at 2022-05-17 14:14:08 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:26.788: INFO: Container service-headless-toggled ready: true, restart count 0
May 17 14:16:26.788: INFO: service-headless-toggled-nw4rn started at 2022-05-17 14:14:08 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:26.788: INFO: Container service-headless-toggled ready: true, restart count 0
May 17 14:16:26.788: INFO: inline-volume-tester-h6pnw started at <nil> (0+0 container statuses recorded)
May 17 14:16:26.788: INFO: kube-proxy-dr9f5 started at 2022-05-17 14:13:23 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:26.788: INFO: Container kube-proxy ready: true, restart count 0
May 17 14:16:26.788: INFO: pod-terminate-status-0-1 started at <nil> (0+0 container statuses recorded)
May 17 14:16:26.788: INFO: hostexec-kind-worker-gzkkg started at 2022-05-17 14:15:51 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:26.788: INFO: Container agnhost-container ready: true, restart count 0
May 17 14:16:26.788: INFO: hostexec-kind-worker-28fdh started at 2022-05-17 14:15:56 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:26.788: INFO: Container agnhost-container ready: false, restart count 0
May 17 14:16:26.788: INFO: pod-client started at 2022-05-17 14:15:09 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:26.788: INFO: Container pod-client ready: true, restart count 0
May 17 14:16:26.788: INFO: liveness-7a123bfb-f826-4ce3-ad61-ebc03b5bc931 started at 2022-05-17 14:14:13 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:26.788: INFO: Container agnhost-container ready: true, restart count 0
May 17 14:16:26.788: INFO: concurrent-27546616-5t86w started at <nil> (0+0 container statuses recorded)
May 17 14:16:26.788: INFO: kindnet-56p79 started at 2022-05-17 14:13:17 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:26.788: INFO: Container kindnet-cni ready: true, restart count 0
May 17 14:16:26.788: INFO: csi-mockplugin-0 started at <nil> (0+0 container statuses recorded)
May 17 14:16:26.789: INFO: implicit-nonroot-uid started at <nil> (0+0 container statuses recorded)
May 17 14:16:26.789: INFO: ss-1 started at <nil> (0+0 container statuses recorded)
May 17 14:16:26.789: INFO: affinity-nodeport-xj24q started at 2022-05-17 14:15:27 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:26.789: INFO: Container affinity-nodeport ready: true, restart count 0
May 17 14:16:26.789: INFO: csi-hostpathplugin-0 started at 2022-05-17 14:15:44 +0000 UTC (0+7 container statuses recorded)
May 17 14:16:26.789: INFO: Container csi-attacher ready: true, restart count 0
May 17 14:16:26.789: INFO: Container csi-provisioner ready: true, restart count 0
May 17 14:16:26.789: INFO: Container csi-resizer ready: true, restart count 0
May 17 14:16:26.789: INFO: Container csi-snapshotter ready: true, restart count 0
May 17 14:16:26.789: INFO: Container hostpath ready: true, restart count 0
May 17 14:16:26.789: INFO: Container liveness-probe ready: true, restart count 0
May 17 14:16:26.789: INFO: Container node-driver-registrar ready: true, restart count 0
May 17 14:16:26.789: INFO: deployment-55649fd747-6dbnz started at 2022-05-17 14:14:10 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:26.789: INFO: Container nginx ready: false, restart count 0
May 17 14:16:26.789: INFO: agnhost-primary-5db8ddd565-ktltd started at 2022-05-17 14:15:55 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:26.789: INFO: Container primary ready: true, restart count 0
May 17 14:16:26.789: INFO: send-events-4adc3414-703f-4b62-a42c-926d4b4b6fc2 started at <nil> (0+0 container statuses recorded)
May 17 14:16:26.789: INFO: termination-message-container5de5961b-b8bb-4868-adb0-b23802f044ae started at <nil> (0+0 container statuses recorded)
May 17 14:16:26.789: INFO: service-proxy-toggled-mlq7q started at 2022-05-17 14:15:47 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:26.789: INFO: Container service-proxy-toggled ready: true, restart count 0
May 17 14:16:26.789: INFO: hostexec-kind-worker-6t6pq started at 2022-05-17 14:15:52 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:26.789: INFO: Container agnhost-container ready: true, restart count 0
May 17 14:16:26.789: INFO: agnhost-replica-6bcf79b489-885sh started at 2022-05-17 14:15:56 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:26.789: INFO: Container replica ready: true, restart count 0
May 17 14:16:27.927: INFO: Latency metrics for node kind-worker
May 17 14:16:27.927: INFO: Logging node info for node kind-worker2
May 17
14:16:27.944: INFO: Node Info: &Node{ObjectMeta:{kind-worker2 d009e97f-d73b-4ed0-8afd-9f776bcb704d 6998 0 2022-05-17 14:13:16 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker2 kubernetes.io/os:linux topology.hostpath.csi/node:kind-worker2] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-2591":"kind-worker2","csi-mock-csi-mock-volumes-8289":"csi-mock-csi-mock-volumes-8289"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubeadm Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubelet Update v1 2022-05-17 14:15:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:47 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:47 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:47 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-17 14:15:47 +0000 UTC,LastTransitionTime:2022-05-17 14:13:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.3,},NodeAddress{Type:Hostname,Address:kind-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:521433a178fd4494a7881394e8d644cf,SystemUUID:c86f292f-55cb-49c2-9dda-efb91a06dab9,BootID:74a98684-362f-4056-ae8e-c98add53e9d2,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.19+64ad393c62a2c5,KubeProxyVersion:v1.22.10-rc.0.19+64ad393c62a2c5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:5963165f44b226e3bcc74555b40074fc3b47b362fc2fce35ac184854a283aa9a k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:63e87a9aad6a6666e3f6f4b9aaee92307a9390cfc1e41a029ceca9f6c4439478 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:123265849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:59073d1c8ff506ad93a224d17df309f7bbbc344acb535ac430fef15e3866f973 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:7f13d8eb6f1182786e7080a1f94ea5d4cf6873534c2d47e79c444c8216f823aa k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:53932856,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 
k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce 
k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 17 14:16:27.945: INFO: Logging kubelet events for node kind-worker2
May 17 14:16:28.064: INFO: Logging pods the kubelet thinks is on node kind-worker2
May 17 14:16:28.167: INFO: service-proxy-disabled-brdhb started at 2022-05-17 14:15:32 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:28.167: INFO: Container service-proxy-disabled ready: true, restart count 0
May 17 14:16:28.167: INFO: pod-e8401689-4cef-4e20-9f2e-264844f9d704 started at 2022-05-17 14:15:53 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:28.167: INFO: Container write-pod ready: true, restart count 0
May 17 14:16:28.167: INFO: service-proxy-disabled-xzkc8 started at 2022-05-17 14:15:32 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:28.167: INFO: Container service-proxy-disabled ready: true, restart count 0
May 17 14:16:28.167: INFO: frontend-685fc574d5-lc6wl started at 2022-05-17 14:15:55 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:28.167: INFO: Container guestbook-frontend ready: true, restart count 0
May 17 14:16:28.167: INFO: frontend-685fc574d5-mj2wn started at 2022-05-17 14:15:55 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:28.167: INFO: Container guestbook-frontend ready: true, restart count 0
May 17 14:16:28.167: INFO: csi-mockplugin-attacher-0 started at 2022-05-17 14:13:58 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:28.167: INFO: Container csi-attacher ready: true, restart count 0
May 17 14:16:28.167: INFO: kube-proxy-sdmmb started at 2022-05-17 14:13:29 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:28.167: INFO: Container kube-proxy ready: true, restart count 0
May 17 14:16:28.167: INFO: agnhost-replica-6bcf79b489-xd4h9 started at 2022-05-17 14:15:56 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:28.167: INFO: Container replica ready: true, restart count 0
May 17 14:16:28.167: INFO: pod-9ec2f9c5-e367-4a4f-8edf-f159344127d8 started at 2022-05-17 14:16:03 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:28.167: INFO: Container write-pod ready: true, restart count 0
May 17 14:16:28.167: INFO: inline-volume-tester-tx6st started at 2022-05-17 14:13:56 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:28.167: INFO: Container csi-volume-tester ready: true, restart count 0
May 17 14:16:28.167: INFO: csi-mockplugin-0 started at 2022-05-17 14:13:58 +0000 UTC (0+3 container statuses recorded)
May 17 14:16:28.167: INFO: Container csi-provisioner ready: true, restart count 0
May 17 14:16:28.167: INFO: Container driver-registrar ready: true, restart count 0
May 17 14:16:28.167: INFO: Container mock ready: true, restart count 0
May 17 14:16:28.167: INFO: service-headless-toggled-dvx6b started at 2022-05-17 14:14:08 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:28.167: INFO: Container service-headless-toggled ready: true, restart count 0
May 17 14:16:28.167: INFO: service-proxy-toggled-djh27 started at 2022-05-17 14:15:47 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:28.167: INFO: Container service-proxy-toggled ready: true, restart count 0
May 17 14:16:28.167: INFO: kindnet-mq8qb started at 2022-05-17 14:13:17 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:28.167: INFO: Container kindnet-cni ready: true, restart count 0
May 17 14:16:28.167: INFO: service-headless-9mh45 started at 2022-05-17 14:13:53 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:28.167: INFO: Container service-headless ready: true, restart count 0
May 17 14:16:28.167: INFO: pod-a2ba308d-816a-406c-bf57-d5840e5d5387 started at 2022-05-17 14:15:58 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:28.167: INFO: Container write-pod ready: true, restart count 0
May 17 14:16:28.167: INFO: affinity-nodeport-f2r7w started at 2022-05-17 14:15:27 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:28.167: INFO: Container affinity-nodeport ready: true, restart count 0
May 17 14:16:28.167: INFO: hostexec-kind-worker2-kwzz9 started at 2022-05-17 14:15:35 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:28.167: INFO: Container agnhost-container ready: true, restart count 0
May 17 14:16:28.167: INFO: hostexec-kind-worker2-gh6gn started at 2022-05-17 14:15:19 +0000 UTC (0+1 container statuses recorded)
May 
17 14:16:28.167: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:28.167: INFO: pod-2d828178-7e03-4331-91b0-2165aa4eeee5 started at <nil> (0+0 container statuses recorded) May 17 14:16:28.167: INFO: csi-mockplugin-0 started at 2022-05-17 14:15:19 +0000 UTC (0+4 container statuses recorded) May 17 14:16:28.167: INFO: Container busybox ready: true, restart count 0 May 17 14:16:28.167: INFO: Container csi-provisioner ready: true, restart count 0 May 17 14:16:28.167: INFO: Container driver-registrar ready: true, restart count 0 May 17 14:16:28.167: INFO: Container mock ready: true, restart count 0 May 17 14:16:28.167: INFO: hostexec-kind-worker2-k8cx7 started at 2022-05-17 14:15:45 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.167: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:28.167: INFO: pod-server-2 started at 2022-05-17 14:15:45 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.167: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:28.167: INFO: csi-hostpathplugin-0 started at 2022-05-17 14:13:56 +0000 UTC (0+7 container statuses recorded) May 17 14:16:28.167: INFO: Container csi-attacher ready: true, restart count 0 May 17 14:16:28.167: INFO: Container csi-provisioner ready: true, restart count 0 May 17 14:16:28.167: INFO: Container csi-resizer ready: true, restart count 0 May 17 14:16:28.167: INFO: Container csi-snapshotter ready: true, restart count 0 May 17 14:16:28.167: INFO: Container hostpath ready: true, restart count 0 May 17 14:16:28.167: INFO: Container liveness-probe ready: true, restart count 0 May 17 14:16:28.167: INFO: Container node-driver-registrar ready: true, restart count 0 May 17 14:16:28.167: INFO: ss-0 started at 2022-05-17 14:15:25 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.167: INFO: Container webserver ready: true, restart count 0 May 17 14:16:28.167: INFO: pod-23740531-141e-4062-9ddb-69c15ab1b060 started at 
2022-05-17 14:15:53 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.167: INFO: Container write-pod ready: true, restart count 0 May 17 14:16:28.167: INFO: pvc-volume-tester-wh448 started at 2022-05-17 14:15:47 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.168: INFO: Container volume-tester ready: true, restart count 0 May 17 14:16:28.168: INFO: service-proxy-toggled-wrl7r started at 2022-05-17 14:15:47 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.168: INFO: Container service-proxy-toggled ready: true, restart count 0 May 17 14:16:28.168: INFO: ss2-0 started at 2022-05-17 14:16:01 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.168: INFO: Container webserver ready: false, restart count 0 May 17 14:16:28.168: INFO: rs-vkdrm started at 2022-05-17 14:14:49 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.168: INFO: Container donothing ready: false, restart count 0 May 17 14:16:30.534: INFO: Latency metrics for node kind-worker2 May 17 14:16:30.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "provisioning-803" for this suite.
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\slocal\]\[LocalVolumeType\:\sdir\-link\-bindmounted\]\s\[Testpattern\:\sPre\-provisioned\sPV\s\(filesystem\svolmode\)\]\svolumeMode\sshould\snot\smount\s\/\smap\sunused\svolumes\sin\sa\spod\s\[LinuxOnly\]$'
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:352 May 17 14:16:22.811: Unexpected error: <*errors.StatusError | 0xc00356b180>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "etcdserver: request timed out", Reason: "", Details: nil, Code: 500, }, } etcdserver: request timed out occurred /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/host_exec.go:110from junit_05.xml
{"msg":"PASSED [sig-network] Services should implement service.kubernetes.io/headless","total":-1,"completed":1,"skipped":0,"failed":0} [BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 17 14:16:05.258: INFO: >>> kubeConfig: /root/.kube/kind-test-config STEP: Building a namespace api object, basename volumemode STEP: Waiting for a default service account to be provisioned in namespace [It] should not mount / map unused volumes in a pod [LinuxOnly] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:352 May 17 14:16:05.537: INFO: In-tree plugin kubernetes.io/local-volume is not migrated, not validating any metrics May 17 14:16:22.811: FAIL: Unexpected error: <*errors.StatusError | 0xc00356b180>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "etcdserver: request timed out", Reason: "", Details: nil, Code: 500, }, } etcdserver: request timed out occurred Full Stack Trace k8s.io/kubernetes/test/e2e/storage/utils.(*hostExecutor).launchNodeExecPod(0xc003492200, 0xc003504cd5, 0xb, 0xb) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/host_exec.go:110 +0x4b9 k8s.io/kubernetes/test/e2e/storage/utils.(*hostExecutor).exec(0xc003492200, 0xc00365a160, 0x151, 0xc003660600, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) 
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/host_exec.go:136 +0x3b5 k8s.io/kubernetes/test/e2e/storage/utils.(*hostExecutor).IssueCommandWithResult(0xc003492200, 0xc00365a160, 0x151, 0xc003660600, 0x5, 0x151, 0xc00365a160, 0xc00094a1a0) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/host_exec.go:169 +0x99 k8s.io/kubernetes/test/e2e/storage/utils.(*hostExecutor).IssueCommand(0xc003492200, 0xc00365a160, 0x151, 0xc003660600, 0x5, 0xc00365a160) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/host_exec.go:178 +0x49 k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).setupLocalVolumeDirectoryLinkBindMounted(0xc003607590, 0xc003660600, 0x0, 0xc0005ea001) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:257 +0x216 k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).Create(0xc003607590, 0xc003660600, 0x70b01fd, 0x14, 0x0, 0x68519e0) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:302 +0x4ed k8s.io/kubernetes/test/e2e/storage/drivers.(*localDriver).CreateVolume(0xc0030a0d00, 0xc003618480, 0x7098411, 0x10, 0xc0030a0d00, 0x6d57d01) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1941 +0x144 k8s.io/kubernetes/test/e2e/storage/framework.CreateVolume(0x7911668, 0xc0030a0d00, 0xc003618480, 0x7098411, 0x10, 0x6f9faa0, 0xc00094a1a0) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/driver_operations.go:43 +0x222 k8s.io/kubernetes/test/e2e/storage/framework.CreateVolumeResource(0x7911668, 0xc0030a0d00, 0xc003618480, 0x7130ad7, 0x27, 0x0, 0x0, 0x7098411, 0x10, 0x0, ...) 
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/volume_resource.go:65 +0x1e5 k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumeModeTestSuite).DefineTests.func7() /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:358 +0x238 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00087ec00) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc00087ec00) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b testing.tRunner(0xc00087ec00, 0x72e36d8) /usr/local/go/src/testing/testing.go:1203 +0xe5 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1248 +0x2b3 [AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "volumemode-1759". STEP: Found 3 events. 
May 17 14:16:25.567: INFO: At 2022-05-17 14:16:05 +0000 UTC - event for hostexec-kind-worker-kbzwd: {default-scheduler } Scheduled: Successfully assigned volumemode-1759/hostexec-kind-worker-kbzwd to kind-worker May 17 14:16:25.567: INFO: At 2022-05-17 14:16:07 +0000 UTC - event for hostexec-kind-worker-kbzwd: {kubelet kind-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine May 17 14:16:25.567: INFO: At 2022-05-17 14:16:08 +0000 UTC - event for hostexec-kind-worker-kbzwd: {kubelet kind-worker} Created: Created container agnhost-container May 17 14:16:25.659: INFO: POD NODE PHASE GRACE CONDITIONS May 17 14:16:25.660: INFO: hostexec-kind-worker-kbzwd kind-worker Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-17 14:16:05 +0000 UTC }] May 17 14:16:25.660: INFO: May 17 14:16:25.747: INFO: Logging node info for node kind-control-plane May 17 14:16:25.813: INFO: Node Info: &Node{ObjectMeta:{kind-control-plane 1fc1c875-88c2-4f79-b48f-f6ef32b124da 586 0 2022-05-17 14:12:51 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-05-17 14:12:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-05-17 14:12:54 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2022-05-17 14:13:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2022-05-17 14:13:20 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-17 14:13:20 +0000 UTC,LastTransitionTime:2022-05-17 14:12:47 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-17 14:13:20 +0000 UTC,LastTransitionTime:2022-05-17 14:12:47 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-17 14:13:20 +0000 UTC,LastTransitionTime:2022-05-17 14:12:47 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-17 14:13:20 +0000 UTC,LastTransitionTime:2022-05-17 14:13:20 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.4,},NodeAddress{Type:Hostname,Address:kind-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:97599c4ddadc4d3eae566d304d7d104d,SystemUUID:e786229d-3adc-40a9-a01d-f7a176ae9288,BootID:74a98684-362f-4056-ae8e-c98add53e9d2,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.19+64ad393c62a2c5,KubeProxyVersion:v1.22.10-rc.0.19+64ad393c62a2c5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:5963165f44b226e3bcc74555b40074fc3b47b362fc2fce35ac184854a283aa9a k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:63e87a9aad6a6666e3f6f4b9aaee92307a9390cfc1e41a029ceca9f6c4439478 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:123265849,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:59073d1c8ff506ad93a224d17df309f7bbbc344acb535ac430fef15e3866f973 
k8s.gcr.io/kube-proxy:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:7f13d8eb6f1182786e7080a1f94ea5d4cf6873534c2d47e79c444c8216f823aa k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:53932856,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 17 14:16:25.813: INFO: Logging kubelet events for node kind-control-plane May 17 14:16:25.878: INFO: Logging pods the kubelet thinks is on node kind-control-plane May 17 14:16:26.016: INFO: kube-controller-manager-kind-control-plane started at 2022-05-17 14:13:01 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.016: INFO: Container kube-controller-manager ready: false, restart count 0 May 17 14:16:26.016: INFO: kube-scheduler-kind-control-plane started at 2022-05-17 14:13:00 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.016: INFO: Container kube-scheduler ready: false, restart count 0 May 17 14:16:26.016: INFO: kindnet-s6wb7 started at 2022-05-17 14:13:07 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.016: INFO: Container kindnet-cni ready: true, restart count 0 May 17 14:16:26.016: INFO: etcd-kind-control-plane started at 2022-05-17 14:13:00 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.016: INFO: Container etcd ready: true, restart count 0 May 17 14:16:26.016: INFO: kube-apiserver-kind-control-plane started at 2022-05-17 14:13:01 +0000 UTC (0+1 container statuses recorded) 
May 17 14:16:26.016: INFO: Container kube-apiserver ready: true, restart count 0 May 17 14:16:26.016: INFO: coredns-78fcd69978-r97ls started at 2022-05-17 14:13:20 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.016: INFO: Container coredns ready: true, restart count 0 May 17 14:16:26.016: INFO: kube-proxy-5tzjs started at 2022-05-17 14:13:26 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.017: INFO: Container kube-proxy ready: true, restart count 0 May 17 14:16:26.017: INFO: coredns-78fcd69978-b6zn2 started at 2022-05-17 14:13:20 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.017: INFO: Container coredns ready: true, restart count 0 May 17 14:16:26.017: INFO: local-path-provisioner-6c9449b9dd-bllk2 started at 2022-05-17 14:13:20 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.017: INFO: Container local-path-provisioner ready: true, restart count 0 May 17 14:16:26.516: INFO: Latency metrics for node kind-control-plane May 17 14:16:26.516: INFO: Logging node info for node kind-worker May 17 14:16:26.664: INFO: Node Info: &Node{ObjectMeta:{kind-worker 5ba69ac9-ccd5-447d-9f27-9bee10b23121 6999 0 2022-05-17 14:13:16 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker kubernetes.io/os:linux topology.hostpath.csi/node:kind-worker] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-9914":"kind-worker","csi-mock-csi-mock-volumes-7782":"csi-mock-csi-mock-volumes-7782"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubeadm Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-05-17 14:15:50 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2022-05-17 14:15:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:56 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:56 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:56 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-17 14:15:56 +0000 UTC,LastTransitionTime:2022-05-17 14:13:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.2,},NodeAddress{Type:Hostname,Address:kind-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d7e8057f975041deb201a063c162c158,SystemUUID:fc207ccb-2e60-4380-b38c-671b606f274a,BootID:74a98684-362f-4056-ae8e-c98add53e9d2,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.19+64ad393c62a2c5,KubeProxyVersion:v1.22.10-rc.0.19+64ad393c62a2c5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:5963165f44b226e3bcc74555b40074fc3b47b362fc2fce35ac184854a283aa9a k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:63e87a9aad6a6666e3f6f4b9aaee92307a9390cfc1e41a029ceca9f6c4439478 
k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:123265849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:59073d1c8ff506ad93a224d17df309f7bbbc344acb535ac430fef15e3866f973 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:86742272,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:7f13d8eb6f1182786e7080a1f94ea5d4cf6873534c2d47e79c444c8216f823aa k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:53932856,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 
k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d 
k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-ephemeral-9914^da35d3ba-d5eb-11ec-918a-225d08066155],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-ephemeral-9914^da35d3ba-d5eb-11ec-918a-225d08066155,DevicePath:,},},Config:nil,},} May 17 14:16:26.665: INFO: Logging kubelet events for node kind-worker May 17 14:16:26.769: INFO: Logging pods the kubelet thinks is on node kind-worker May 17 14:16:26.863: INFO: affinity-nodeport-xj24q started at 2022-05-17 14:15:27 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.863: INFO: Container affinity-nodeport ready: true, restart count 0 May 17 14:16:26.863: INFO: csi-hostpathplugin-0 started at 2022-05-17 14:15:44 +0000 UTC (0+7 container statuses recorded) May 17 14:16:26.863: INFO: Container csi-attacher ready: true, restart count 0 May 17 14:16:26.863: INFO: Container csi-provisioner ready: true, restart count 0 May 17 14:16:26.863: INFO: Container csi-resizer ready: true, restart count 0 May 17 14:16:26.863: INFO: Container csi-snapshotter ready: true, restart count 0 May 17 14:16:26.863: INFO: Container hostpath ready: true, restart count 0 May 17 14:16:26.863: INFO: Container liveness-probe ready: true, restart count 0 May 17 
14:16:26.863: INFO: Container node-driver-registrar ready: true, restart count 0 May 17 14:16:26.863: INFO: deployment-55649fd747-6dbnz started at 2022-05-17 14:14:10 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.863: INFO: Container nginx ready: false, restart count 0 May 17 14:16:26.863: INFO: send-events-4adc3414-703f-4b62-a42c-926d4b4b6fc2 started at <nil> (0+0 container statuses recorded) May 17 14:16:26.863: INFO: termination-message-container5de5961b-b8bb-4868-adb0-b23802f044ae started at <nil> (0+0 container statuses recorded) May 17 14:16:26.863: INFO: service-proxy-toggled-mlq7q started at 2022-05-17 14:15:47 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.863: INFO: Container service-proxy-toggled ready: true, restart count 0 May 17 14:16:26.863: INFO: hostexec-kind-worker-6t6pq started at 2022-05-17 14:15:52 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.863: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:26.863: INFO: agnhost-primary-5db8ddd565-ktltd started at 2022-05-17 14:15:55 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.863: INFO: Container primary ready: true, restart count 0 May 17 14:16:26.863: INFO: agnhost-replica-6bcf79b489-885sh started at 2022-05-17 14:15:56 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.863: INFO: Container replica ready: true, restart count 0 May 17 14:16:26.863: INFO: service-headless-tgqlh started at 2022-05-17 14:13:53 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.863: INFO: Container service-headless ready: true, restart count 0 May 17 14:16:26.863: INFO: deployment-55649fd747-f5bss started at 2022-05-17 14:14:10 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.863: INFO: Container nginx ready: false, restart count 0 May 17 14:16:26.863: INFO: pod-terminate-status-2-1 started at <nil> (0+0 container statuses recorded) May 17 14:16:26.863: INFO: service-proxy-disabled-qk4md started at 2022-05-17 
14:15:32 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.863: INFO: Container service-proxy-disabled ready: true, restart count 0 May 17 14:16:26.863: INFO: busybox-readonly-fs83fb0cb9-60d4-419e-8e63-6beee25ccd06 started at 2022-05-17 14:15:24 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.863: INFO: Container busybox-readonly-fs83fb0cb9-60d4-419e-8e63-6beee25ccd06 ready: true, restart count 0 May 17 14:16:26.863: INFO: pod-b8564c48-8aa2-4bdb-a5eb-256f8afd47cc started at <nil> (0+0 container statuses recorded) May 17 14:16:26.863: INFO: concurrent-27546615-rw8md started at 2022-05-17 14:15:00 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.863: INFO: Container c ready: true, restart count 0 May 17 14:16:26.863: INFO: hostexec-kind-worker-kbzwd started at <nil> (0+0 container statuses recorded) May 17 14:16:26.863: INFO: pod-qos-class-5c206c25-c993-4e12-9d89-06931c05bb47 started at <nil> (0+0 container statuses recorded) May 17 14:16:26.863: INFO: service-headless-q5g89 started at 2022-05-17 14:13:53 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.863: INFO: Container service-headless ready: true, restart count 0 May 17 14:16:26.863: INFO: service-headless-toggled-9zh79 started at 2022-05-17 14:14:08 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.863: INFO: Container service-headless-toggled ready: true, restart count 0 May 17 14:16:26.863: INFO: service-headless-toggled-nw4rn started at 2022-05-17 14:14:08 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.863: INFO: Container service-headless-toggled ready: true, restart count 0 May 17 14:16:26.863: INFO: inline-volume-tester-h6pnw started at <nil> (0+0 container statuses recorded) May 17 14:16:26.863: INFO: frontend-685fc574d5-64jqm started at 2022-05-17 14:15:55 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.863: INFO: Container guestbook-frontend ready: true, restart count 0 May 17 14:16:26.863: INFO: kube-proxy-dr9f5 started at 
2022-05-17 14:13:23 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.863: INFO: Container kube-proxy ready: true, restart count 0 May 17 14:16:26.863: INFO: pod-terminate-status-0-1 started at <nil> (0+0 container statuses recorded) May 17 14:16:26.863: INFO: hostexec-kind-worker-gzkkg started at 2022-05-17 14:15:51 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.863: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:26.863: INFO: hostexec-kind-worker-28fdh started at 2022-05-17 14:15:56 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.863: INFO: Container agnhost-container ready: false, restart count 0 May 17 14:16:26.863: INFO: pod-client started at 2022-05-17 14:15:09 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.863: INFO: Container pod-client ready: true, restart count 0 May 17 14:16:26.863: INFO: liveness-7a123bfb-f826-4ce3-ad61-ebc03b5bc931 started at 2022-05-17 14:14:13 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.863: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:26.863: INFO: concurrent-27546616-5t86w started at <nil> (0+0 container statuses recorded) May 17 14:16:26.863: INFO: kindnet-56p79 started at 2022-05-17 14:13:17 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.864: INFO: Container kindnet-cni ready: true, restart count 0 May 17 14:16:26.864: INFO: csi-mockplugin-0 started at <nil> (0+0 container statuses recorded) May 17 14:16:26.864: INFO: implicit-nonroot-uid started at <nil> (0+0 container statuses recorded) May 17 14:16:26.864: INFO: ss-1 started at <nil> (0+0 container statuses recorded) May 17 14:16:28.424: INFO: Latency metrics for node kind-worker May 17 14:16:28.424: INFO: Logging node info for node kind-worker2 May 17 14:16:28.449: INFO: Node Info: &Node{ObjectMeta:{kind-worker2 d009e97f-d73b-4ed0-8afd-9f776bcb704d 6998 0 2022-05-17 14:13:16 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 
beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker2 kubernetes.io/os:linux topology.hostpath.csi/node:kind-worker2] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-2591":"kind-worker2","csi-mock-csi-mock-volumes-8289":"csi-mock-csi-mock-volumes-8289"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubeadm Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubelet Update v1 2022-05-17 14:15:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:47 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:47 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:47 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-17 14:15:47 +0000 UTC,LastTransitionTime:2022-05-17 14:13:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.3,},NodeAddress{Type:Hostname,Address:kind-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:521433a178fd4494a7881394e8d644cf,SystemUUID:c86f292f-55cb-49c2-9dda-efb91a06dab9,BootID:74a98684-362f-4056-ae8e-c98add53e9d2,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.19+64ad393c62a2c5,KubeProxyVersion:v1.22.10-rc.0.19+64ad393c62a2c5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:5963165f44b226e3bcc74555b40074fc3b47b362fc2fce35ac184854a283aa9a k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:63e87a9aad6a6666e3f6f4b9aaee92307a9390cfc1e41a029ceca9f6c4439478 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:123265849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:59073d1c8ff506ad93a224d17df309f7bbbc344acb535ac430fef15e3866f973 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:7f13d8eb6f1182786e7080a1f94ea5d4cf6873534c2d47e79c444c8216f823aa k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:53932856,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 
k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce 
k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 17 14:16:28.450: INFO: Logging kubelet events for node kind-worker2 May 17 14:16:28.496: INFO: Logging pods the kubelet thinks is on node kind-worker2 May 17 14:16:28.680: INFO: frontend-685fc574d5-mj2wn started at 2022-05-17 14:15:55 +0000 UTC (0+1 container 
statuses recorded) May 17 14:16:28.680: INFO: Container guestbook-frontend ready: true, restart count 0 May 17 14:16:28.680: INFO: service-proxy-disabled-brdhb started at 2022-05-17 14:15:32 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.680: INFO: Container service-proxy-disabled ready: true, restart count 0 May 17 14:16:28.680: INFO: pod-e8401689-4cef-4e20-9f2e-264844f9d704 started at 2022-05-17 14:15:53 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.680: INFO: Container write-pod ready: true, restart count 0 May 17 14:16:28.680: INFO: service-proxy-disabled-xzkc8 started at 2022-05-17 14:15:32 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.680: INFO: Container service-proxy-disabled ready: true, restart count 0 May 17 14:16:28.680: INFO: frontend-685fc574d5-lc6wl started at 2022-05-17 14:15:55 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.680: INFO: Container guestbook-frontend ready: true, restart count 0 May 17 14:16:28.680: INFO: csi-mockplugin-attacher-0 started at 2022-05-17 14:13:58 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.680: INFO: Container csi-attacher ready: true, restart count 0 May 17 14:16:28.680: INFO: kube-proxy-sdmmb started at 2022-05-17 14:13:29 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.681: INFO: Container kube-proxy ready: true, restart count 0 May 17 14:16:28.681: INFO: agnhost-replica-6bcf79b489-xd4h9 started at 2022-05-17 14:15:56 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.681: INFO: Container replica ready: true, restart count 0 May 17 14:16:28.681: INFO: pod-9ec2f9c5-e367-4a4f-8edf-f159344127d8 started at 2022-05-17 14:16:03 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.681: INFO: Container write-pod ready: true, restart count 0 May 17 14:16:28.681: INFO: service-proxy-toggled-djh27 started at 2022-05-17 14:15:47 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.681: INFO: Container service-proxy-toggled 
ready: true, restart count 0 May 17 14:16:28.681: INFO: inline-volume-tester-tx6st started at 2022-05-17 14:13:56 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.681: INFO: Container csi-volume-tester ready: true, restart count 0 May 17 14:16:28.681: INFO: csi-mockplugin-0 started at 2022-05-17 14:13:58 +0000 UTC (0+3 container statuses recorded) May 17 14:16:28.681: INFO: Container csi-provisioner ready: true, restart count 0 May 17 14:16:28.681: INFO: Container driver-registrar ready: true, restart count 0 May 17 14:16:28.681: INFO: Container mock ready: true, restart count 0 May 17 14:16:28.681: INFO: service-headless-toggled-dvx6b started at 2022-05-17 14:14:08 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.681: INFO: Container service-headless-toggled ready: true, restart count 0 May 17 14:16:28.681: INFO: affinity-nodeport-f2r7w started at 2022-05-17 14:15:27 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.681: INFO: Container affinity-nodeport ready: true, restart count 0 May 17 14:16:28.681: INFO: hostexec-kind-worker2-kwzz9 started at 2022-05-17 14:15:35 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.681: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:28.681: INFO: kindnet-mq8qb started at 2022-05-17 14:13:17 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.681: INFO: Container kindnet-cni ready: true, restart count 0 May 17 14:16:28.681: INFO: service-headless-9mh45 started at 2022-05-17 14:13:53 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.681: INFO: Container service-headless ready: true, restart count 0 May 17 14:16:28.681: INFO: pod-a2ba308d-816a-406c-bf57-d5840e5d5387 started at 2022-05-17 14:15:58 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.681: INFO: Container write-pod ready: true, restart count 0 May 17 14:16:28.681: INFO: pod-server-2 started at 2022-05-17 14:15:45 +0000 UTC (0+1 container statuses recorded) May 17 
14:16:28.681: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:28.681: INFO: hostexec-kind-worker2-gh6gn started at 2022-05-17 14:15:19 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.681: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:28.681: INFO: pod-2d828178-7e03-4331-91b0-2165aa4eeee5 started at <nil> (0+0 container statuses recorded) May 17 14:16:28.681: INFO: csi-mockplugin-0 started at 2022-05-17 14:15:19 +0000 UTC (0+4 container statuses recorded) May 17 14:16:28.681: INFO: Container busybox ready: true, restart count 0 May 17 14:16:28.681: INFO: Container csi-provisioner ready: true, restart count 0 May 17 14:16:28.681: INFO: Container driver-registrar ready: true, restart count 0 May 17 14:16:28.681: INFO: Container mock ready: true, restart count 0 May 17 14:16:28.681: INFO: hostexec-kind-worker2-k8cx7 started at 2022-05-17 14:15:45 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.681: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:28.681: INFO: csi-hostpathplugin-0 started at 2022-05-17 14:13:56 +0000 UTC (0+7 container statuses recorded) May 17 14:16:28.681: INFO: Container csi-attacher ready: true, restart count 0 May 17 14:16:28.681: INFO: Container csi-provisioner ready: true, restart count 0 May 17 14:16:28.681: INFO: Container csi-resizer ready: true, restart count 0 May 17 14:16:28.681: INFO: Container csi-snapshotter ready: true, restart count 0 May 17 14:16:28.681: INFO: Container hostpath ready: true, restart count 0 May 17 14:16:28.681: INFO: Container liveness-probe ready: true, restart count 0 May 17 14:16:28.681: INFO: Container node-driver-registrar ready: true, restart count 0 May 17 14:16:28.681: INFO: ss-0 started at 2022-05-17 14:15:25 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.681: INFO: Container webserver ready: true, restart count 0 May 17 14:16:28.681: INFO: pod-23740531-141e-4062-9ddb-69c15ab1b060 started 
at 2022-05-17 14:15:53 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.681: INFO: Container write-pod ready: true, restart count 0 May 17 14:16:28.681: INFO: pvc-volume-tester-wh448 started at 2022-05-17 14:15:47 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.681: INFO: Container volume-tester ready: true, restart count 0 May 17 14:16:28.681: INFO: service-proxy-toggled-wrl7r started at 2022-05-17 14:15:47 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.681: INFO: Container service-proxy-toggled ready: true, restart count 0 May 17 14:16:28.681: INFO: ss2-0 started at 2022-05-17 14:16:01 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.681: INFO: Container webserver ready: false, restart count 0 May 17 14:16:28.681: INFO: rs-vkdrm started at 2022-05-17 14:14:49 +0000 UTC (0+1 container statuses recorded) May 17 14:16:28.681: INFO: Container donothing ready: false, restart count 0 May 17 14:16:30.881: INFO: Latency metrics for node kind-worker2 May 17 14:16:30.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volumemode-1759" for this suite.
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sPersistentVolumes\-local\s\s\[Volume\stype\:\sblock\]\sOne\spod\srequesting\sone\sprebound\sPVC\sshould\sbe\sable\sto\smount\svolume\sand\swrite\sfrom\spod1$'
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 May 17 14:16:22.815: Unexpected error: <*errors.errorString | 0xc0032c45a0>: { s: "pod \"pod-a2ba308d-816a-406c-bf57-d5840e5d5387\" is not Running: etcdserver: request timed out", } pod "pod-a2ba308d-816a-406c-bf57-d5840e5d5387" is not Running: etcdserver: request timed out occurred /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:218 from junit_02.xml
[BeforeEach] [sig-storage] PersistentVolumes-local /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 17 14:15:45.326: INFO: >>> kubeConfig: /root/.kube/kind-test-config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: block] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "kind-worker2" using path "/tmp/local-volume-test-6f456323-ce20-43f1-8ab5-60525eb98adf" May 17 14:15:55.588: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-6f456323-ce20-43f1-8ab5-60525eb98adf && dd if=/dev/zero of=/tmp/local-volume-test-6f456323-ce20-43f1-8ab5-60525eb98adf/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-6f456323-ce20-43f1-8ab5-60525eb98adf/file] Namespace:persistent-local-volumes-test-1929 PodName:hostexec-kind-worker2-k8cx7 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 17 14:15:55.588: INFO: >>> kubeConfig: /root/.kube/kind-test-config May 17 14:15:55.941: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-6f456323-ce20-43f1-8ab5-60525eb98adf/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-1929 PodName:hostexec-kind-worker2-k8cx7 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true
PreserveWhitespace:true Quiet:false} May 17 14:15:55.941: INFO: >>> kubeConfig: /root/.kube/kind-test-config STEP: Creating local PVCs and PVs May 17 14:15:56.164: INFO: Creating a PV followed by a PVC May 17 14:15:56.204: INFO: Waiting for PV local-pvw9zxr to bind to PVC pvc-dkhnz May 17 14:15:56.204: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-dkhnz] to have phase Bound May 17 14:15:56.265: INFO: PersistentVolumeClaim pvc-dkhnz found but phase is Pending instead of Bound. May 17 14:15:58.278: INFO: PersistentVolumeClaim pvc-dkhnz found and phase=Bound (2.074016321s) May 17 14:15:58.279: INFO: Waiting up to 3m0s for PersistentVolume local-pvw9zxr to have phase Bound May 17 14:15:58.292: INFO: PersistentVolume local-pvw9zxr found and phase=Bound (13.541003ms) [BeforeEach] One pod requesting one prebound PVC /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod May 17 14:16:22.815: FAIL: Unexpected error: <*errors.errorString | 0xc0032c45a0>: { s: "pod \"pod-a2ba308d-816a-406c-bf57-d5840e5d5387\" is not Running: etcdserver: request timed out", } pod "pod-a2ba308d-816a-406c-bf57-d5840e5d5387" is not Running: etcdserver: request timed out occurred Full Stack Trace k8s.io/kubernetes/test/e2e/storage.glob..func21.2.3.1() /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:218 +0x11b k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000cf1200) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc000cf1200) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b testing.tRunner(0xc000cf1200, 0x72e36d8) /usr/local/go/src/testing/testing.go:1203 +0xe5 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1248 +0x2b3 [AfterEach] One pod requesting one prebound PVC 
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-a2ba308d-816a-406c-bf57-d5840e5d5387 in namespace persistent-local-volumes-test-1929 [AfterEach] [Volume type: block] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV May 17 14:16:25.701: INFO: Deleting PersistentVolumeClaim "pvc-dkhnz" May 17 14:16:25.815: INFO: Deleting PersistentVolume "local-pvw9zxr" May 17 14:16:25.960: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-6f456323-ce20-43f1-8ab5-60525eb98adf/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-1929 PodName:hostexec-kind-worker2-k8cx7 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 17 14:16:25.960: INFO: >>> kubeConfig: /root/.kube/kind-test-config STEP: Tear down block device "/dev/loop0" on node "kind-worker2" at path /tmp/local-volume-test-6f456323-ce20-43f1-8ab5-60525eb98adf/file May 17 14:16:26.353: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-1929 PodName:hostexec-kind-worker2-k8cx7 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 17 14:16:26.353: INFO: >>> kubeConfig: /root/.kube/kind-test-config STEP: Removing the test directory /tmp/local-volume-test-6f456323-ce20-43f1-8ab5-60525eb98adf May 17 14:16:26.662: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-6f456323-ce20-43f1-8ab5-60525eb98adf] Namespace:persistent-local-volumes-test-1929 
PodName:hostexec-kind-worker2-k8cx7 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 17 14:16:26.662: INFO: >>> kubeConfig: /root/.kube/kind-test-config [AfterEach] [sig-storage] PersistentVolumes-local /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "persistent-local-volumes-test-1929". STEP: Found 10 events. May 17 14:16:27.246: INFO: At 2022-05-17 14:15:45 +0000 UTC - event for hostexec-kind-worker2-k8cx7: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-1929/hostexec-kind-worker2-k8cx7 to kind-worker2 May 17 14:16:27.246: INFO: At 2022-05-17 14:15:46 +0000 UTC - event for hostexec-kind-worker2-k8cx7: {kubelet kind-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine May 17 14:16:27.246: INFO: At 2022-05-17 14:15:46 +0000 UTC - event for hostexec-kind-worker2-k8cx7: {kubelet kind-worker2} Created: Created container agnhost-container May 17 14:16:27.246: INFO: At 2022-05-17 14:15:47 +0000 UTC - event for hostexec-kind-worker2-k8cx7: {kubelet kind-worker2} Started: Started container agnhost-container May 17 14:16:27.246: INFO: At 2022-05-17 14:15:58 +0000 UTC - event for pod-a2ba308d-816a-406c-bf57-d5840e5d5387: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-1929/pod-a2ba308d-816a-406c-bf57-d5840e5d5387 to kind-worker2 May 17 14:16:27.246: INFO: At 2022-05-17 14:16:00 +0000 UTC - event for pod-a2ba308d-816a-406c-bf57-d5840e5d5387: {kubelet kind-worker2} SuccessfulMountVolume: MapVolume.MapPodDevice succeeded for volume "local-pvw9zxr" globalMapPath "/var/lib/kubelet/plugins/kubernetes.io~local-volume/volumeDevices/local-pvw9zxr" May 17 14:16:27.246: INFO: At 2022-05-17 14:16:00 +0000 UTC - event for pod-a2ba308d-816a-406c-bf57-d5840e5d5387: {kubelet 
kind-worker2} SuccessfulMountVolume: MapVolume.MapPodDevice succeeded for volume "local-pvw9zxr" volumeMapPath "/var/lib/kubelet/pods/bef09a2c-6fee-4627-9568-b1b8ff7491e9/volumeDevices/kubernetes.io~local-volume" May 17 14:16:27.246: INFO: At 2022-05-17 14:16:01 +0000 UTC - event for pod-a2ba308d-816a-406c-bf57-d5840e5d5387: {kubelet kind-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-1" already present on machine May 17 14:16:27.246: INFO: At 2022-05-17 14:16:02 +0000 UTC - event for pod-a2ba308d-816a-406c-bf57-d5840e5d5387: {kubelet kind-worker2} Created: Created container write-pod May 17 14:16:27.246: INFO: At 2022-05-17 14:16:02 +0000 UTC - event for pod-a2ba308d-816a-406c-bf57-d5840e5d5387: {kubelet kind-worker2} Started: Started container write-pod May 17 14:16:27.251: INFO: POD NODE PHASE GRACE CONDITIONS May 17 14:16:27.251: INFO: hostexec-kind-worker2-k8cx7 kind-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-17 14:15:45 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-17 14:15:47 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-17 14:15:47 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-17 14:15:45 +0000 UTC }] May 17 14:16:27.251: INFO: pod-a2ba308d-816a-406c-bf57-d5840e5d5387 kind-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-17 14:15:58 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-17 14:16:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-17 14:16:02 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-17 14:15:58 +0000 UTC }] May 17 14:16:27.251: INFO: May 17 14:16:27.267: INFO: Logging node info for node kind-control-plane May 17 14:16:27.287: INFO: Node Info: &Node{ObjectMeta:{kind-control-plane 1fc1c875-88c2-4f79-b48f-f6ef32b124da 586 0 2022-05-17 14:12:51 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux 
kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-05-17 14:12:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-05-17 14:12:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2022-05-17 14:13:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2022-05-17 14:13:20 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} 
status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-17 14:13:20 +0000 UTC,LastTransitionTime:2022-05-17 14:12:47 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-17 14:13:20 +0000 UTC,LastTransitionTime:2022-05-17 14:12:47 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-17 14:13:20 +0000 UTC,LastTransitionTime:2022-05-17 14:12:47 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-17 14:13:20 +0000 UTC,LastTransitionTime:2022-05-17 14:13:20 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.4,},NodeAddress{Type:Hostname,Address:kind-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:97599c4ddadc4d3eae566d304d7d104d,SystemUUID:e786229d-3adc-40a9-a01d-f7a176ae9288,BootID:74a98684-362f-4056-ae8e-c98add53e9d2,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.19+64ad393c62a2c5,KubeProxyVersion:v1.22.10-rc.0.19+64ad393c62a2c5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:5963165f44b226e3bcc74555b40074fc3b47b362fc2fce35ac184854a283aa9a k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:63e87a9aad6a6666e3f6f4b9aaee92307a9390cfc1e41a029ceca9f6c4439478 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:123265849,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:59073d1c8ff506ad93a224d17df309f7bbbc344acb535ac430fef15e3866f973 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:7f13d8eb6f1182786e7080a1f94ea5d4cf6873534c2d47e79c444c8216f823aa 
k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:53932856,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 17 14:16:27.287: INFO: Logging kubelet events for node kind-control-plane
May 17 14:16:27.296: INFO: Logging pods the kubelet thinks is on node kind-control-plane
May 17 14:16:27.323: INFO: coredns-78fcd69978-b6zn2 started at 2022-05-17 14:13:20 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:27.323: INFO: Container coredns ready: true, restart count 0
May 17 14:16:27.323: INFO: local-path-provisioner-6c9449b9dd-bllk2 started at 2022-05-17 14:13:20 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:27.323: INFO: Container local-path-provisioner ready: true, restart count 0
May 17 14:16:27.323: INFO: coredns-78fcd69978-r97ls started at 2022-05-17 14:13:20 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:27.324: INFO: Container coredns ready: true, restart count 0
May 17 14:16:27.324: INFO: kube-proxy-5tzjs started at 2022-05-17 14:13:26 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:27.324: INFO: Container kube-proxy ready: true, restart count 0
May 17 14:16:27.324: INFO: etcd-kind-control-plane started at 2022-05-17 14:13:00 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:27.324: INFO: Container etcd ready: true, restart count 0
May 17 14:16:27.324: INFO: kube-apiserver-kind-control-plane started at 2022-05-17 14:13:01 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:27.324: INFO: Container kube-apiserver ready: true, restart count 0
May 17 14:16:27.324: INFO: kube-controller-manager-kind-control-plane started at 2022-05-17 14:13:01 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:27.324: INFO: Container kube-controller-manager ready: false, restart count 0
May 17 14:16:27.324: INFO: kube-scheduler-kind-control-plane started at 2022-05-17 14:13:00 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:27.324: INFO: Container kube-scheduler ready: false, restart count 0
May 17 14:16:27.324: INFO: kindnet-s6wb7 started at 2022-05-17 14:13:07 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:27.324: INFO: Container kindnet-cni ready: true, restart count 0
May 17 14:16:27.539: INFO: Latency metrics for node kind-control-plane
May 17 14:16:27.539: INFO: Logging node info for node kind-worker
May 17 14:16:27.551: INFO: Node Info: &Node{ObjectMeta:{kind-worker 5ba69ac9-ccd5-447d-9f27-9bee10b23121 6999 0 2022-05-17 14:13:16 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker kubernetes.io/os:linux topology.hostpath.csi/node:kind-worker] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-9914":"kind-worker","csi-mock-csi-mock-volumes-7782":"csi-mock-csi-mock-volumes-7782"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubeadm Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-05-17 14:15:50 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2022-05-17 14:15:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:56 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 
+0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:56 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:56 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-17 14:15:56 +0000 UTC,LastTransitionTime:2022-05-17 14:13:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.2,},NodeAddress{Type:Hostname,Address:kind-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d7e8057f975041deb201a063c162c158,SystemUUID:fc207ccb-2e60-4380-b38c-671b606f274a,BootID:74a98684-362f-4056-ae8e-c98add53e9d2,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.19+64ad393c62a2c5,KubeProxyVersion:v1.22.10-rc.0.19+64ad393c62a2c5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:5963165f44b226e3bcc74555b40074fc3b47b362fc2fce35ac184854a283aa9a k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:63e87a9aad6a6666e3f6f4b9aaee92307a9390cfc1e41a029ceca9f6c4439478 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:123265849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 
k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:59073d1c8ff506ad93a224d17df309f7bbbc344acb535ac430fef15e3866f973 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:86742272,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:7f13d8eb6f1182786e7080a1f94ea5d4cf6873534c2d47e79c444c8216f823aa k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:53932856,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 
k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b 
k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-ephemeral-9914^da35d3ba-d5eb-11ec-918a-225d08066155],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-ephemeral-9914^da35d3ba-d5eb-11ec-918a-225d08066155,DevicePath:,},},Config:nil,},}
May 17 14:16:27.552: INFO: Logging kubelet events for node kind-worker
May 17 14:16:27.568: INFO: Logging pods the kubelet thinks is on node kind-worker
May 17 14:16:27.670: INFO: kindnet-56p79 started at 2022-05-17 14:13:17 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:27.670: INFO: Container kindnet-cni ready: true, restart count 0
May 17 14:16:27.670: INFO: csi-mockplugin-0 started at <nil> (0+0 container statuses recorded)
May 17 14:16:27.670: INFO: implicit-nonroot-uid started at <nil> (0+0 container statuses recorded)
May 17 14:16:27.670: INFO: ss-1 started at <nil> (0+0 container statuses recorded)
May 17 14:16:27.670: INFO: affinity-nodeport-xj24q started at 2022-05-17 14:15:27 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:27.670: INFO: Container affinity-nodeport ready: true, restart count 0
May 17 14:16:27.670: INFO: csi-hostpathplugin-0 started at 2022-05-17 14:15:44 +0000 UTC (0+7 container statuses recorded)
May 17 14:16:27.670: INFO: Container csi-attacher ready: true, restart count 0
May 17 14:16:27.670: INFO: Container csi-provisioner ready: true, restart count 0
May 17 14:16:27.670: INFO: Container csi-resizer ready: true, restart count 0
May 17 14:16:27.670: INFO: Container csi-snapshotter ready: true, restart count 0
May 17 14:16:27.670: INFO: Container hostpath ready: true, restart count 0
May 17 14:16:27.670: INFO: Container liveness-probe ready: true, restart count 0
May 17 14:16:27.670: INFO: Container node-driver-registrar ready: true, restart count 0
May 17 14:16:27.670: INFO: deployment-55649fd747-6dbnz started at 2022-05-17 14:14:10 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:27.670: INFO: Container nginx ready: false, restart count 0
May 17 14:16:27.670: INFO: agnhost-primary-5db8ddd565-ktltd started at 2022-05-17 14:15:55 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:27.670: INFO: Container primary ready: true, restart count 0
May 17 14:16:27.670: INFO: send-events-4adc3414-703f-4b62-a42c-926d4b4b6fc2 started at <nil> (0+0 container statuses recorded)
May 17 14:16:27.670: INFO: termination-message-container5de5961b-b8bb-4868-adb0-b23802f044ae started at <nil> (0+0 container statuses recorded)
May 17 14:16:27.670: INFO: service-proxy-toggled-mlq7q started at 2022-05-17 14:15:47 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:27.670: INFO: Container service-proxy-toggled ready: true, restart count 0
May 17 14:16:27.670: INFO: hostexec-kind-worker-6t6pq started at 2022-05-17 14:15:52 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:27.670: INFO: Container agnhost-container ready: true, restart count 0
May 17 14:16:27.670: INFO: agnhost-replica-6bcf79b489-885sh started at 2022-05-17 14:15:56 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:27.670: INFO: Container replica ready: true, restart count 0
May 17 14:16:27.670: INFO: hostexec-kind-worker-kbzwd started at <nil> (0+0 container statuses recorded)
May 17 14:16:27.671: INFO: service-headless-tgqlh started at 2022-05-17 14:13:53 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:27.671: INFO: Container service-headless ready: true, restart count 0
May 17 14:16:27.671: INFO: deployment-55649fd747-f5bss started at 2022-05-17 14:14:10 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:27.671: INFO: Container nginx ready: false, restart count 0
May 17 14:16:27.671: INFO: pod-terminate-status-2-1 started at <nil> (0+0 container statuses recorded)
May 17 14:16:27.671: INFO: service-proxy-disabled-qk4md started at 2022-05-17 14:15:32 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:27.671: INFO: Container service-proxy-disabled ready: true, restart count 0
May 17 14:16:27.671: INFO: busybox-readonly-fs83fb0cb9-60d4-419e-8e63-6beee25ccd06 started at 2022-05-17 14:15:24 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:27.671: INFO: Container busybox-readonly-fs83fb0cb9-60d4-419e-8e63-6beee25ccd06 ready: true, restart count 0
May 17 14:16:27.671: INFO: pod-b8564c48-8aa2-4bdb-a5eb-256f8afd47cc started at <nil> (0+0 container statuses recorded)
May 17 14:16:27.671: INFO: concurrent-27546615-rw8md started at 2022-05-17 14:15:00 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:27.671: INFO: Container c ready: true, restart count 0
May 17 14:16:27.671: INFO: pod-qos-class-5c206c25-c993-4e12-9d89-06931c05bb47 started at 2022-05-17 14:16:07 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:27.671: INFO: Container agnhost ready: false, restart count 0
May 17 14:16:27.671: INFO: frontend-685fc574d5-64jqm started at 2022-05-17 14:15:55 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:27.671: INFO: Container guestbook-frontend ready: true, restart count 0
May 17 14:16:27.671: INFO: service-headless-q5g89 started at 2022-05-17 14:13:53 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:27.671: INFO: Container service-headless ready: true, restart count 0
May 17 14:16:27.671: INFO: service-headless-toggled-9zh79 started at 2022-05-17 14:14:08 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:27.671: INFO: Container service-headless-toggled ready: true, restart count 0
May 17 14:16:27.671: INFO: service-headless-toggled-nw4rn started at 2022-05-17 14:14:08 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:27.671: INFO: Container service-headless-toggled ready: true, restart count 0
May 17 14:16:27.671: INFO: inline-volume-tester-h6pnw started at <nil> (0+0 container statuses recorded)
May 17 14:16:27.671: INFO: kube-proxy-dr9f5 started at 2022-05-17 14:13:23 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:27.671: INFO: Container kube-proxy ready: true, restart count 0
May 17 14:16:27.671: INFO: pod-terminate-status-0-1 started at <nil> (0+0 container statuses recorded)
May 17 14:16:27.671: INFO: hostexec-kind-worker-gzkkg started at 2022-05-17 14:15:51 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:27.671: INFO: Container agnhost-container ready: true, restart count 0
May 17 14:16:27.671: INFO: hostexec-kind-worker-28fdh started at 2022-05-17 14:15:56 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:27.671: INFO: Container agnhost-container ready: false, restart count 0
May 17 14:16:27.671: INFO: pod-client started at 2022-05-17 14:15:09 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:27.671: INFO: Container pod-client ready: true, restart count 0
May 17 14:16:27.671: INFO: liveness-7a123bfb-f826-4ce3-ad61-ebc03b5bc931 started at 2022-05-17 14:14:13 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:27.671: INFO: Container agnhost-container ready: true, restart count 0
May 17 14:16:27.671: INFO: concurrent-27546616-5t86w started at <nil> (0+0 container statuses recorded)
May 17 14:16:29.014: INFO: Latency metrics for node kind-worker
May 17 14:16:29.014: INFO: Logging node info for node kind-worker2
May 17 14:16:29.082: INFO: Node Info: &Node{ObjectMeta:{kind-worker2 d009e97f-d73b-4ed0-8afd-9f776bcb704d 6998 0 2022-05-17 14:13:16 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker2
kubernetes.io/os:linux topology.hostpath.csi/node:kind-worker2] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-2591":"kind-worker2","csi-mock-csi-mock-volumes-8289":"csi-mock-csi-mock-volumes-8289"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubeadm Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubelet Update v1 2022-05-17 14:15:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: 
{{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:47 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:47 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:47 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-17 14:15:47 +0000 UTC,LastTransitionTime:2022-05-17 14:13:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.3,},NodeAddress{Type:Hostname,Address:kind-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:521433a178fd4494a7881394e8d644cf,SystemUUID:c86f292f-55cb-49c2-9dda-efb91a06dab9,BootID:74a98684-362f-4056-ae8e-c98add53e9d2,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 
21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.19+64ad393c62a2c5,KubeProxyVersion:v1.22.10-rc.0.19+64ad393c62a2c5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:5963165f44b226e3bcc74555b40074fc3b47b362fc2fce35ac184854a283aa9a k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:63e87a9aad6a6666e3f6f4b9aaee92307a9390cfc1e41a029ceca9f6c4439478 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:123265849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:59073d1c8ff506ad93a224d17df309f7bbbc344acb535ac430fef15e3866f973 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:7f13d8eb6f1182786e7080a1f94ea5d4cf6873534c2d47e79c444c8216f823aa k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:53932856,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 
k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 
k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 17 14:16:29.082: INFO: Logging kubelet events for node kind-worker2 May 17 14:16:29.184: INFO: Logging pods the kubelet thinks is on node kind-worker2 May 17 14:16:29.448: INFO: pod-server-2 started at 2022-05-17 14:15:45 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.448: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:29.448: INFO: hostexec-kind-worker2-gh6gn started at 2022-05-17 14:15:19 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.448: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:29.448: INFO: pod-2d828178-7e03-4331-91b0-2165aa4eeee5 started at <nil> (0+0 container statuses recorded) May 17 14:16:29.448: INFO: csi-mockplugin-0 started at 2022-05-17 14:15:19 +0000 UTC (0+4 container statuses recorded) May 17 14:16:29.448: INFO: Container busybox ready: true, restart count 0 May 17 14:16:29.448: INFO: Container csi-provisioner ready: true, restart count 0 May 17 
14:16:29.448: INFO: Container driver-registrar ready: true, restart count 0 May 17 14:16:29.448: INFO: Container mock ready: true, restart count 0 May 17 14:16:29.448: INFO: hostexec-kind-worker2-k8cx7 started at 2022-05-17 14:15:45 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.448: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:29.448: INFO: csi-hostpathplugin-0 started at 2022-05-17 14:13:56 +0000 UTC (0+7 container statuses recorded) May 17 14:16:29.448: INFO: Container csi-attacher ready: true, restart count 0 May 17 14:16:29.448: INFO: Container csi-provisioner ready: true, restart count 0 May 17 14:16:29.448: INFO: Container csi-resizer ready: true, restart count 0 May 17 14:16:29.448: INFO: Container csi-snapshotter ready: true, restart count 0 May 17 14:16:29.448: INFO: Container hostpath ready: true, restart count 0 May 17 14:16:29.448: INFO: Container liveness-probe ready: true, restart count 0 May 17 14:16:29.448: INFO: Container node-driver-registrar ready: true, restart count 0 May 17 14:16:29.448: INFO: ss-0 started at 2022-05-17 14:15:25 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.448: INFO: Container webserver ready: true, restart count 0 May 17 14:16:29.448: INFO: pod-23740531-141e-4062-9ddb-69c15ab1b060 started at 2022-05-17 14:15:53 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.448: INFO: Container write-pod ready: true, restart count 0 May 17 14:16:29.448: INFO: pvc-volume-tester-wh448 started at 2022-05-17 14:15:47 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.448: INFO: Container volume-tester ready: true, restart count 0 May 17 14:16:29.448: INFO: service-proxy-toggled-wrl7r started at 2022-05-17 14:15:47 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.448: INFO: Container service-proxy-toggled ready: true, restart count 0 May 17 14:16:29.448: INFO: ss2-0 started at 2022-05-17 14:16:01 +0000 UTC (0+1 container statuses recorded) May 17 
14:16:29.448: INFO: Container webserver ready: false, restart count 0 May 17 14:16:29.448: INFO: rs-vkdrm started at 2022-05-17 14:14:49 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.448: INFO: Container donothing ready: false, restart count 0 May 17 14:16:29.448: INFO: frontend-685fc574d5-mj2wn started at 2022-05-17 14:15:55 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.448: INFO: Container guestbook-frontend ready: true, restart count 0 May 17 14:16:29.448: INFO: service-proxy-disabled-brdhb started at 2022-05-17 14:15:32 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.448: INFO: Container service-proxy-disabled ready: true, restart count 0 May 17 14:16:29.448: INFO: service-proxy-disabled-xzkc8 started at 2022-05-17 14:15:32 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.448: INFO: Container service-proxy-disabled ready: true, restart count 0 May 17 14:16:29.448: INFO: frontend-685fc574d5-lc6wl started at 2022-05-17 14:15:55 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.448: INFO: Container guestbook-frontend ready: true, restart count 0 May 17 14:16:29.448: INFO: csi-mockplugin-attacher-0 started at 2022-05-17 14:13:58 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.448: INFO: Container csi-attacher ready: false, restart count 0 May 17 14:16:29.448: INFO: kube-proxy-sdmmb started at 2022-05-17 14:13:29 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.448: INFO: Container kube-proxy ready: true, restart count 0 May 17 14:16:29.448: INFO: agnhost-replica-6bcf79b489-xd4h9 started at 2022-05-17 14:15:56 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.448: INFO: Container replica ready: true, restart count 0 May 17 14:16:29.448: INFO: pod-9ec2f9c5-e367-4a4f-8edf-f159344127d8 started at 2022-05-17 14:16:03 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.448: INFO: Container write-pod ready: true, restart count 0 May 17 14:16:29.448: INFO: 
service-proxy-toggled-djh27 started at 2022-05-17 14:15:47 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.448: INFO: Container service-proxy-toggled ready: true, restart count 0 May 17 14:16:29.448: INFO: inline-volume-tester-tx6st started at 2022-05-17 14:13:56 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.448: INFO: Container csi-volume-tester ready: true, restart count 0 May 17 14:16:29.448: INFO: csi-mockplugin-0 started at 2022-05-17 14:13:58 +0000 UTC (0+3 container statuses recorded) May 17 14:16:29.448: INFO: Container csi-provisioner ready: true, restart count 0 May 17 14:16:29.448: INFO: Container driver-registrar ready: true, restart count 0 May 17 14:16:29.448: INFO: Container mock ready: true, restart count 0 May 17 14:16:29.448: INFO: service-headless-toggled-dvx6b started at 2022-05-17 14:14:08 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.448: INFO: Container service-headless-toggled ready: true, restart count 0 May 17 14:16:29.448: INFO: affinity-nodeport-f2r7w started at 2022-05-17 14:15:27 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.448: INFO: Container affinity-nodeport ready: true, restart count 0 May 17 14:16:29.448: INFO: hostexec-kind-worker2-kwzz9 started at 2022-05-17 14:15:35 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.448: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:29.448: INFO: kindnet-mq8qb started at 2022-05-17 14:13:17 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.448: INFO: Container kindnet-cni ready: true, restart count 0 May 17 14:16:29.448: INFO: service-headless-9mh45 started at 2022-05-17 14:13:53 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.448: INFO: Container service-headless ready: true, restart count 0 May 17 14:16:29.448: INFO: pod-a2ba308d-816a-406c-bf57-d5840e5d5387 started at 2022-05-17 14:15:58 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.448: INFO: Container write-pod 
ready: true, restart count 0 May 17 14:16:30.979: INFO: Latency metrics for node kind-worker2 May 17 14:16:30.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-1929" for this suite.
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sPersistentVolumes\-local\s\s\[Volume\stype\:\sdir\-bindmounted\]\sTwo\spods\smounting\sa\slocal\svolume\sone\safter\sthe\sother\sshould\sbe\sable\sto\swrite\sfrom\spod1\sand\sread\sfrom\spod2$'
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 May 17 14:16:22.836: Unexpected error: <*errors.errorString | 0xc003b844c0>: { s: "pod \"pod-9ec2f9c5-e367-4a4f-8edf-f159344127d8\" is not Running: etcdserver: request timed out", } pod "pod-9ec2f9c5-e367-4a4f-8edf-f159344127d8" is not Running: etcdserver: request timed out occurred /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:789from junit_14.xml
[BeforeEach] [sig-storage] PersistentVolumes-local /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 17 14:15:35.071: INFO: >>> kubeConfig: /root/.kube/kind-test-config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-bindmounted] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes May 17 14:15:51.268: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-6590c4df-bc1b-4727-9f9b-404955281dda && mount --bind /tmp/local-volume-test-6590c4df-bc1b-4727-9f9b-404955281dda /tmp/local-volume-test-6590c4df-bc1b-4727-9f9b-404955281dda] Namespace:persistent-local-volumes-test-4290 PodName:hostexec-kind-worker2-kwzz9 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 17 14:15:51.268: INFO: >>> kubeConfig: /root/.kube/kind-test-config STEP: Creating local PVCs and PVs May 17 14:15:51.410: INFO: Creating a PV followed by a PVC May 17 14:15:51.439: INFO: Waiting for PV local-pv8kljv to bind to PVC pvc-fr9ws May 17 14:15:51.440: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-fr9ws] to have phase Bound May 17 14:15:51.464: INFO: PersistentVolumeClaim pvc-fr9ws found but phase is Pending instead of Bound.
May 17 14:15:53.471: INFO: PersistentVolumeClaim pvc-fr9ws found and phase=Bound (2.031154673s) May 17 14:15:53.471: INFO: Waiting up to 3m0s for PersistentVolume local-pv8kljv to have phase Bound May 17 14:15:53.474: INFO: PersistentVolume local-pv8kljv found and phase=Bound (3.296202ms) [It] should be able to write from pod1 and read from pod2 /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod May 17 14:16:03.511: INFO: pod "pod-e8401689-4cef-4e20-9f2e-264844f9d704" created on Node "kind-worker2" STEP: Writing in pod1 May 17 14:16:03.511: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4290 PodName:pod-e8401689-4cef-4e20-9f2e-264844f9d704 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 17 14:16:03.511: INFO: >>> kubeConfig: /root/.kube/kind-test-config May 17 14:16:03.652: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: <nil> May 17 14:16:03.652: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4290 PodName:pod-e8401689-4cef-4e20-9f2e-264844f9d704 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 17 14:16:03.652: INFO: >>> kubeConfig: /root/.kube/kind-test-config May 17 14:16:03.778: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: <nil> STEP: Deleting pod1 STEP: Deleting pod pod-e8401689-4cef-4e20-9f2e-264844f9d704 in namespace persistent-local-volumes-test-4290 STEP: Creating pod2 STEP: Creating a pod May 17 14:16:22.836: FAIL: Unexpected error: <*errors.errorString |
0xc003b844c0>: { s: "pod \"pod-9ec2f9c5-e367-4a4f-8edf-f159344127d8\" is not Running: etcdserver: request timed out", } pod "pod-9ec2f9c5-e367-4a4f-8edf-f159344127d8" is not Running: etcdserver: request timed out occurred Full Stack Trace k8s.io/kubernetes/test/e2e/storage.twoPodsReadWriteSerialTest(0xc003f42580, 0xc003e62510, 0xc0039c4030) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:789 +0x329 k8s.io/kubernetes/test/e2e/storage.glob..func21.2.5.1() /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:256 +0x45 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000583e00) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc000583e00) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b testing.tRunner(0xc000583e00, 0x72e36d8) /usr/local/go/src/testing/testing.go:1203 +0xe5 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1248 +0x2b3 [AfterEach] [Volume type: dir-bindmounted] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV May 17 14:16:22.837: INFO: Deleting PersistentVolumeClaim "pvc-fr9ws" May 17 14:16:25.560: INFO: Deleting PersistentVolume "local-pv8kljv" STEP: Removing the test directory May 17 14:16:25.694: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-6590c4df-bc1b-4727-9f9b-404955281dda && rm -r /tmp/local-volume-test-6590c4df-bc1b-4727-9f9b-404955281dda] Namespace:persistent-local-volumes-test-4290 PodName:hostexec-kind-worker2-kwzz9 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 17 14:16:25.694: INFO: >>> kubeConfig: /root/.kube/kind-test-config [AfterEach] [sig-storage]
PersistentVolumes-local /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "persistent-local-volumes-test-4290". STEP: Found 13 events. May 17 14:16:26.242: INFO: At 2022-05-17 14:15:35 +0000 UTC - event for hostexec-kind-worker2-kwzz9: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-4290/hostexec-kind-worker2-kwzz9 to kind-worker2 May 17 14:16:26.242: INFO: At 2022-05-17 14:15:36 +0000 UTC - event for hostexec-kind-worker2-kwzz9: {kubelet kind-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine May 17 14:16:26.242: INFO: At 2022-05-17 14:15:36 +0000 UTC - event for hostexec-kind-worker2-kwzz9: {kubelet kind-worker2} Created: Created container agnhost-container May 17 14:16:26.242: INFO: At 2022-05-17 14:15:37 +0000 UTC - event for hostexec-kind-worker2-kwzz9: {kubelet kind-worker2} Started: Started container agnhost-container May 17 14:16:26.242: INFO: At 2022-05-17 14:15:53 +0000 UTC - event for pod-e8401689-4cef-4e20-9f2e-264844f9d704: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-4290/pod-e8401689-4cef-4e20-9f2e-264844f9d704 to kind-worker2 May 17 14:16:26.242: INFO: At 2022-05-17 14:15:55 +0000 UTC - event for pod-e8401689-4cef-4e20-9f2e-264844f9d704: {kubelet kind-worker2} Started: Started container write-pod May 17 14:16:26.242: INFO: At 2022-05-17 14:15:55 +0000 UTC - event for pod-e8401689-4cef-4e20-9f2e-264844f9d704: {kubelet kind-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-1" already present on machine May 17 14:16:26.242: INFO: At 2022-05-17 14:15:55 +0000 UTC - event for pod-e8401689-4cef-4e20-9f2e-264844f9d704: {kubelet kind-worker2} Created: Created container write-pod May 17 14:16:26.242: INFO: At 2022-05-17 14:16:03 +0000 UTC - event for pod-9ec2f9c5-e367-4a4f-8edf-f159344127d8:
{default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-4290/pod-9ec2f9c5-e367-4a4f-8edf-f159344127d8 to kind-worker2 May 17 14:16:26.242: INFO: At 2022-05-17 14:16:03 +0000 UTC - event for pod-e8401689-4cef-4e20-9f2e-264844f9d704: {kubelet kind-worker2} Killing: Stopping container write-pod May 17 14:16:26.242: INFO: At 2022-05-17 14:16:06 +0000 UTC - event for pod-9ec2f9c5-e367-4a4f-8edf-f159344127d8: {kubelet kind-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-1" already present on machine May 17 14:16:26.242: INFO: At 2022-05-17 14:16:07 +0000 UTC - event for pod-9ec2f9c5-e367-4a4f-8edf-f159344127d8: {kubelet kind-worker2} Started: Started container write-pod May 17 14:16:26.242: INFO: At 2022-05-17 14:16:07 +0000 UTC - event for pod-9ec2f9c5-e367-4a4f-8edf-f159344127d8: {kubelet kind-worker2} Created: Created container write-pod May 17 14:16:26.301: INFO: POD NODE PHASE GRACE CONDITIONS May 17 14:16:26.301: INFO: hostexec-kind-worker2-kwzz9 kind-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-17 14:15:35 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-17 14:15:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-17 14:15:37 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-17 14:15:35 +0000 UTC }] May 17 14:16:26.301: INFO: pod-9ec2f9c5-e367-4a4f-8edf-f159344127d8 kind-worker2 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-17 14:16:03 +0000 UTC }] May 17 14:16:26.301: INFO: pod-e8401689-4cef-4e20-9f2e-264844f9d704 kind-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-17 14:15:53 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-17 14:15:56 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-17 14:15:56 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-17 14:15:53 +0000 UTC }] May 17 14:16:26.301: INFO: May 17 14:16:26.373: INFO: 
Logging node info for node kind-control-plane May 17 14:16:26.441: INFO: Node Info: &Node{ObjectMeta:{kind-control-plane 1fc1c875-88c2-4f79-b48f-f6ef32b124da 586 0 2022-05-17 14:12:51 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-05-17 14:12:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-05-17 14:12:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2022-05-17 14:13:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2022-05-17 14:13:20 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} 
status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-17 14:13:20 +0000 UTC,LastTransitionTime:2022-05-17 14:12:47 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-17 14:13:20 +0000 UTC,LastTransitionTime:2022-05-17 14:12:47 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-17 14:13:20 +0000 UTC,LastTransitionTime:2022-05-17 14:12:47 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-17 14:13:20 +0000 UTC,LastTransitionTime:2022-05-17 14:13:20 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.4,},NodeAddress{Type:Hostname,Address:kind-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:97599c4ddadc4d3eae566d304d7d104d,SystemUUID:e786229d-3adc-40a9-a01d-f7a176ae9288,BootID:74a98684-362f-4056-ae8e-c98add53e9d2,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.19+64ad393c62a2c5,KubeProxyVersion:v1.22.10-rc.0.19+64ad393c62a2c5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:5963165f44b226e3bcc74555b40074fc3b47b362fc2fce35ac184854a283aa9a k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:63e87a9aad6a6666e3f6f4b9aaee92307a9390cfc1e41a029ceca9f6c4439478 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:123265849,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:59073d1c8ff506ad93a224d17df309f7bbbc344acb535ac430fef15e3866f973 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:7f13d8eb6f1182786e7080a1f94ea5d4cf6873534c2d47e79c444c8216f823aa 
k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:53932856,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 17 14:16:26.442: INFO: Logging kubelet events for node kind-control-plane May 17 14:16:26.471: INFO: Logging pods the kubelet thinks is on node kind-control-plane May 17 14:16:26.553: INFO: local-path-provisioner-6c9449b9dd-bllk2 started at 2022-05-17 14:13:20 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.553: INFO: Container local-path-provisioner ready: true, restart count 0 May 17 14:16:26.553: INFO: coredns-78fcd69978-r97ls started at 2022-05-17 14:13:20 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.553: INFO: Container coredns ready: true, restart count 0 May 17 14:16:26.553: INFO: kube-proxy-5tzjs started at 2022-05-17 14:13:26 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.553: INFO: Container kube-proxy ready: true, restart count 0 May 17 14:16:26.553: INFO: coredns-78fcd69978-b6zn2 started at 2022-05-17 14:13:20 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.553: INFO: Container coredns ready: true, restart count 0 May 17 14:16:26.553: INFO: kube-apiserver-kind-control-plane started at 2022-05-17 14:13:01 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.553: INFO: Container kube-apiserver ready: true, restart count 0 May 17 14:16:26.553: INFO: kube-controller-manager-kind-control-plane started at 2022-05-17 14:13:01 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.553: INFO: Container kube-controller-manager 
ready: false, restart count 0 May 17 14:16:26.553: INFO: kube-scheduler-kind-control-plane started at 2022-05-17 14:13:00 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.553: INFO: Container kube-scheduler ready: false, restart count 0 May 17 14:16:26.553: INFO: kindnet-s6wb7 started at 2022-05-17 14:13:07 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.553: INFO: Container kindnet-cni ready: true, restart count 0 May 17 14:16:26.553: INFO: etcd-kind-control-plane started at 2022-05-17 14:13:00 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.553: INFO: Container etcd ready: true, restart count 0 May 17 14:16:27.239: INFO: Latency metrics for node kind-control-plane May 17 14:16:27.240: INFO: Logging node info for node kind-worker May 17 14:16:27.267: INFO: Node Info: &Node{ObjectMeta:{kind-worker 5ba69ac9-ccd5-447d-9f27-9bee10b23121 6999 0 2022-05-17 14:13:16 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker kubernetes.io/os:linux topology.hostpath.csi/node:kind-worker] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-9914":"kind-worker","csi-mock-csi-mock-volumes-7782":"csi-mock-csi-mock-volumes-7782"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubeadm Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-05-17 14:15:50 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2022-05-17 14:15:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:56 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 
+0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:56 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:56 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-17 14:15:56 +0000 UTC,LastTransitionTime:2022-05-17 14:13:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.2,},NodeAddress{Type:Hostname,Address:kind-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d7e8057f975041deb201a063c162c158,SystemUUID:fc207ccb-2e60-4380-b38c-671b606f274a,BootID:74a98684-362f-4056-ae8e-c98add53e9d2,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.19+64ad393c62a2c5,KubeProxyVersion:v1.22.10-rc.0.19+64ad393c62a2c5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:5963165f44b226e3bcc74555b40074fc3b47b362fc2fce35ac184854a283aa9a k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:63e87a9aad6a6666e3f6f4b9aaee92307a9390cfc1e41a029ceca9f6c4439478 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:123265849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 
k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:59073d1c8ff506ad93a224d17df309f7bbbc344acb535ac430fef15e3866f973 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:86742272,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:7f13d8eb6f1182786e7080a1f94ea5d4cf6873534c2d47e79c444c8216f823aa k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:53932856,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 
k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b 
k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-ephemeral-9914^da35d3ba-d5eb-11ec-918a-225d08066155],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-ephemeral-9914^da35d3ba-d5eb-11ec-918a-225d08066155,DevicePath:,},},Config:nil,},} May 17 14:16:27.268: INFO: Logging kubelet events for node kind-worker May 17 14:16:27.298: INFO: Logging pods the kubelet thinks is on node kind-worker May 17 14:16:27.351: INFO: hostexec-kind-worker-gzkkg started at 2022-05-17 14:15:51 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.351: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:27.352: INFO: hostexec-kind-worker-28fdh started at 2022-05-17 14:15:56 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.352: INFO: Container agnhost-container ready: false, restart count 0 May 17 14:16:27.352: INFO: pod-client started at 2022-05-17 14:15:09 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.352: INFO: Container pod-client ready: true, restart count 0 May 17 14:16:27.352: INFO: liveness-7a123bfb-f826-4ce3-ad61-ebc03b5bc931 started at 2022-05-17 14:14:13 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.352: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:27.352: INFO: concurrent-27546616-5t86w started at <nil> (0+0 container statuses recorded) May 17 14:16:27.352: INFO: kindnet-56p79 started at 
2022-05-17 14:13:17 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.352: INFO: Container kindnet-cni ready: true, restart count 0 May 17 14:16:27.352: INFO: csi-mockplugin-0 started at <nil> (0+0 container statuses recorded) May 17 14:16:27.352: INFO: implicit-nonroot-uid started at <nil> (0+0 container statuses recorded) May 17 14:16:27.352: INFO: ss-1 started at <nil> (0+0 container statuses recorded) May 17 14:16:27.352: INFO: affinity-nodeport-xj24q started at 2022-05-17 14:15:27 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.352: INFO: Container affinity-nodeport ready: true, restart count 0 May 17 14:16:27.352: INFO: csi-hostpathplugin-0 started at 2022-05-17 14:15:44 +0000 UTC (0+7 container statuses recorded) May 17 14:16:27.352: INFO: Container csi-attacher ready: true, restart count 0 May 17 14:16:27.352: INFO: Container csi-provisioner ready: true, restart count 0 May 17 14:16:27.352: INFO: Container csi-resizer ready: true, restart count 0 May 17 14:16:27.352: INFO: Container csi-snapshotter ready: true, restart count 0 May 17 14:16:27.352: INFO: Container hostpath ready: true, restart count 0 May 17 14:16:27.352: INFO: Container liveness-probe ready: true, restart count 0 May 17 14:16:27.352: INFO: Container node-driver-registrar ready: true, restart count 0 May 17 14:16:27.352: INFO: deployment-55649fd747-6dbnz started at 2022-05-17 14:14:10 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.352: INFO: Container nginx ready: false, restart count 0 May 17 14:16:27.352: INFO: service-proxy-toggled-mlq7q started at 2022-05-17 14:15:47 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.352: INFO: Container service-proxy-toggled ready: true, restart count 0 May 17 14:16:27.352: INFO: hostexec-kind-worker-6t6pq started at 2022-05-17 14:15:52 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.352: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:27.352: INFO: 
agnhost-primary-5db8ddd565-ktltd started at 2022-05-17 14:15:55 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.352: INFO: Container primary ready: true, restart count 0 May 17 14:16:27.352: INFO: send-events-4adc3414-703f-4b62-a42c-926d4b4b6fc2 started at <nil> (0+0 container statuses recorded) May 17 14:16:27.352: INFO: termination-message-container5de5961b-b8bb-4868-adb0-b23802f044ae started at <nil> (0+0 container statuses recorded) May 17 14:16:27.352: INFO: agnhost-replica-6bcf79b489-885sh started at 2022-05-17 14:15:56 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.352: INFO: Container replica ready: true, restart count 0 May 17 14:16:27.352: INFO: pod-b8564c48-8aa2-4bdb-a5eb-256f8afd47cc started at <nil> (0+0 container statuses recorded) May 17 14:16:27.352: INFO: concurrent-27546615-rw8md started at 2022-05-17 14:15:00 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.352: INFO: Container c ready: true, restart count 0 May 17 14:16:27.352: INFO: hostexec-kind-worker-kbzwd started at <nil> (0+0 container statuses recorded) May 17 14:16:27.352: INFO: service-headless-tgqlh started at 2022-05-17 14:13:53 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.352: INFO: Container service-headless ready: true, restart count 0 May 17 14:16:27.352: INFO: deployment-55649fd747-f5bss started at 2022-05-17 14:14:10 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.352: INFO: Container nginx ready: false, restart count 0 May 17 14:16:27.352: INFO: pod-terminate-status-2-1 started at <nil> (0+0 container statuses recorded) May 17 14:16:27.352: INFO: service-proxy-disabled-qk4md started at 2022-05-17 14:15:32 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.352: INFO: Container service-proxy-disabled ready: true, restart count 0 May 17 14:16:27.352: INFO: busybox-readonly-fs83fb0cb9-60d4-419e-8e63-6beee25ccd06 started at 2022-05-17 14:15:24 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.352: 
INFO: Container busybox-readonly-fs83fb0cb9-60d4-419e-8e63-6beee25ccd06 ready: true, restart count 0 May 17 14:16:27.352: INFO: pod-qos-class-5c206c25-c993-4e12-9d89-06931c05bb47 started at 2022-05-17 14:16:07 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.352: INFO: Container agnhost ready: false, restart count 0 May 17 14:16:27.352: INFO: inline-volume-tester-h6pnw started at <nil> (0+0 container statuses recorded) May 17 14:16:27.352: INFO: frontend-685fc574d5-64jqm started at 2022-05-17 14:15:55 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.352: INFO: Container guestbook-frontend ready: true, restart count 0 May 17 14:16:27.352: INFO: service-headless-q5g89 started at 2022-05-17 14:13:53 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.353: INFO: Container service-headless ready: true, restart count 0 May 17 14:16:27.353: INFO: service-headless-toggled-9zh79 started at 2022-05-17 14:14:08 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.353: INFO: Container service-headless-toggled ready: true, restart count 0 May 17 14:16:27.353: INFO: service-headless-toggled-nw4rn started at 2022-05-17 14:14:08 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.353: INFO: Container service-headless-toggled ready: true, restart count 0 May 17 14:16:27.353: INFO: kube-proxy-dr9f5 started at 2022-05-17 14:13:23 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.353: INFO: Container kube-proxy ready: true, restart count 0 May 17 14:16:27.353: INFO: pod-terminate-status-0-1 started at <nil> (0+0 container statuses recorded) May 17 14:16:29.003: INFO: Latency metrics for node kind-worker May 17 14:16:29.003: INFO: Logging node info for node kind-worker2 May 17 14:16:29.045: INFO: Node Info: &Node{ObjectMeta:{kind-worker2 d009e97f-d73b-4ed0-8afd-9f776bcb704d 6998 0 2022-05-17 14:13:16 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 
kubernetes.io/hostname:kind-worker2 kubernetes.io/os:linux topology.hostpath.csi/node:kind-worker2] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-2591":"kind-worker2","csi-mock-csi-mock-volumes-8289":"csi-mock-csi-mock-volumes-8289"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubeadm Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubelet Update v1 2022-05-17 14:15:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 
DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:47 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:47 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:47 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-17 14:15:47 +0000 UTC,LastTransitionTime:2022-05-17 14:13:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.3,},NodeAddress{Type:Hostname,Address:kind-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:521433a178fd4494a7881394e8d644cf,SystemUUID:c86f292f-55cb-49c2-9dda-efb91a06dab9,BootID:74a98684-362f-4056-ae8e-c98add53e9d2,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 
21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.19+64ad393c62a2c5,KubeProxyVersion:v1.22.10-rc.0.19+64ad393c62a2c5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:5963165f44b226e3bcc74555b40074fc3b47b362fc2fce35ac184854a283aa9a k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:63e87a9aad6a6666e3f6f4b9aaee92307a9390cfc1e41a029ceca9f6c4439478 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:123265849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:59073d1c8ff506ad93a224d17df309f7bbbc344acb535ac430fef15e3866f973 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:7f13d8eb6f1182786e7080a1f94ea5d4cf6873534c2d47e79c444c8216f823aa k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:53932856,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 
k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 
k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 17 14:16:29.045: INFO: Logging kubelet events for node kind-worker2 May 17 14:16:29.184: INFO: Logging pods the kubelet thinks is on node kind-worker2 May 17 14:16:29.385: INFO: kube-proxy-sdmmb started at 2022-05-17 14:13:29 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.385: INFO: Container kube-proxy ready: true, restart count 0 May 17 14:16:29.385: INFO: agnhost-replica-6bcf79b489-xd4h9 started at 2022-05-17 14:15:56 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.385: INFO: Container replica ready: true, restart count 0 May 17 14:16:29.385: INFO: pod-9ec2f9c5-e367-4a4f-8edf-f159344127d8 started at 2022-05-17 14:16:03 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.385: INFO: Container write-pod ready: true, restart count 0 May 17 14:16:29.385: INFO: service-proxy-toggled-djh27 started at 2022-05-17 14:15:47 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.385: INFO: Container service-proxy-toggled ready: true, 
restart count 0 May 17 14:16:29.385: INFO: inline-volume-tester-tx6st started at 2022-05-17 14:13:56 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.385: INFO: Container csi-volume-tester ready: true, restart count 0 May 17 14:16:29.385: INFO: csi-mockplugin-0 started at 2022-05-17 14:13:58 +0000 UTC (0+3 container statuses recorded) May 17 14:16:29.385: INFO: Container csi-provisioner ready: true, restart count 0 May 17 14:16:29.385: INFO: Container driver-registrar ready: true, restart count 0 May 17 14:16:29.385: INFO: Container mock ready: true, restart count 0 May 17 14:16:29.385: INFO: service-headless-toggled-dvx6b started at 2022-05-17 14:14:08 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.385: INFO: Container service-headless-toggled ready: true, restart count 0 May 17 14:16:29.385: INFO: hostexec-kind-worker2-kwzz9 started at 2022-05-17 14:15:35 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.385: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:29.385: INFO: kindnet-mq8qb started at 2022-05-17 14:13:17 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.385: INFO: Container kindnet-cni ready: true, restart count 0 May 17 14:16:29.385: INFO: service-headless-9mh45 started at 2022-05-17 14:13:53 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.385: INFO: Container service-headless ready: true, restart count 0 May 17 14:16:29.385: INFO: pod-a2ba308d-816a-406c-bf57-d5840e5d5387 started at 2022-05-17 14:15:58 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.385: INFO: Container write-pod ready: true, restart count 0 May 17 14:16:29.385: INFO: affinity-nodeport-f2r7w started at 2022-05-17 14:15:27 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.385: INFO: Container affinity-nodeport ready: true, restart count 0 May 17 14:16:29.385: INFO: hostexec-kind-worker2-gh6gn started at 2022-05-17 14:15:19 +0000 UTC (0+1 container statuses recorded) May 17 
14:16:29.385: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:29.385: INFO: pod-2d828178-7e03-4331-91b0-2165aa4eeee5 started at <nil> (0+0 container statuses recorded) May 17 14:16:29.385: INFO: csi-mockplugin-0 started at 2022-05-17 14:15:19 +0000 UTC (0+4 container statuses recorded) May 17 14:16:29.385: INFO: Container busybox ready: true, restart count 0 May 17 14:16:29.385: INFO: Container csi-provisioner ready: true, restart count 0 May 17 14:16:29.385: INFO: Container driver-registrar ready: true, restart count 0 May 17 14:16:29.385: INFO: Container mock ready: true, restart count 0 May 17 14:16:29.385: INFO: hostexec-kind-worker2-k8cx7 started at 2022-05-17 14:15:45 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.385: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:29.385: INFO: pod-server-2 started at 2022-05-17 14:15:45 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.385: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:29.385: INFO: csi-hostpathplugin-0 started at 2022-05-17 14:13:56 +0000 UTC (0+7 container statuses recorded) May 17 14:16:29.385: INFO: Container csi-attacher ready: true, restart count 0 May 17 14:16:29.385: INFO: Container csi-provisioner ready: true, restart count 0 May 17 14:16:29.385: INFO: Container csi-resizer ready: true, restart count 0 May 17 14:16:29.385: INFO: Container csi-snapshotter ready: true, restart count 0 May 17 14:16:29.385: INFO: Container hostpath ready: true, restart count 0 May 17 14:16:29.385: INFO: Container liveness-probe ready: true, restart count 0 May 17 14:16:29.385: INFO: Container node-driver-registrar ready: true, restart count 0 May 17 14:16:29.385: INFO: ss-0 started at 2022-05-17 14:15:25 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.385: INFO: Container webserver ready: true, restart count 0 May 17 14:16:29.385: INFO: pod-23740531-141e-4062-9ddb-69c15ab1b060 started at 2022-05-17 
14:15:53 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.385: INFO: Container write-pod ready: true, restart count 0 May 17 14:16:29.385: INFO: pvc-volume-tester-wh448 started at 2022-05-17 14:15:47 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.385: INFO: Container volume-tester ready: true, restart count 0 May 17 14:16:29.385: INFO: service-proxy-toggled-wrl7r started at 2022-05-17 14:15:47 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.385: INFO: Container service-proxy-toggled ready: true, restart count 0 May 17 14:16:29.385: INFO: ss2-0 started at 2022-05-17 14:16:01 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.385: INFO: Container webserver ready: false, restart count 0 May 17 14:16:29.385: INFO: rs-vkdrm started at 2022-05-17 14:14:49 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.385: INFO: Container donothing ready: false, restart count 0 May 17 14:16:29.385: INFO: service-proxy-disabled-brdhb started at 2022-05-17 14:15:32 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.385: INFO: Container service-proxy-disabled ready: true, restart count 0 May 17 14:16:29.385: INFO: service-proxy-disabled-xzkc8 started at 2022-05-17 14:15:32 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.385: INFO: Container service-proxy-disabled ready: true, restart count 0 May 17 14:16:29.385: INFO: frontend-685fc574d5-lc6wl started at 2022-05-17 14:15:55 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.385: INFO: Container guestbook-frontend ready: true, restart count 0 May 17 14:16:29.385: INFO: frontend-685fc574d5-mj2wn started at 2022-05-17 14:15:55 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.385: INFO: Container guestbook-frontend ready: true, restart count 0 May 17 14:16:29.385: INFO: csi-mockplugin-attacher-0 started at 2022-05-17 14:13:58 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.385: INFO: Container csi-attacher ready: true, restart count 0 
May 17 14:16:30.718: INFO: Latency metrics for node kind-worker2 May 17 14:16:30.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-4290" for this suite.
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sPersistentVolumes\-local\s\s\[Volume\stype\:\sdir\-link\-bindmounted\]\sOne\spod\srequesting\sone\sprebound\sPVC\sshould\sbe\sable\sto\smount\svolume\sand\sread\sfrom\spod1$'
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 May 17 14:16:22.833: Unexpected error: <*errors.StatusError | 0xc000528a00>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "etcdserver: request timed out", Reason: "", Details: nil, Code: 500, }, } etcdserver: request timed out occurred /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/host_exec.go:110from junit_01.xml
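Both failures in this run surface as HTTP 500 `StatusError`s from the apiserver ("resource quota evaluation timed out", "etcdserver: request timed out"), i.e. transient backend timeouts rather than test-logic bugs. As a hypothetical illustration only (not part of the e2e suite, and using a stand-in `statusError` type instead of the real `k8s.io/apimachinery/pkg/api/errors.StatusError`), a caller that treats code-500 responses as retryable could wrap the call like this:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// statusError is a minimal stand-in mimicking the Code field of
// apimachinery's StatusError (hypothetical, for illustration).
type statusError struct {
	Code    int
	Message string
}

func (e *statusError) Error() string { return e.Message }

// isTransient reports whether err looks like a retryable apiserver 500,
// such as "etcdserver: request timed out".
func isTransient(err error) bool {
	var se *statusError
	return errors.As(err, &se) && se.Code == 500
}

// withRetry retries fn up to attempts times, backing off between
// transient failures; non-transient errors are returned immediately.
func withRetry(attempts int, delay time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil || !isTransient(err) {
			return err
		}
		time.Sleep(delay)
	}
	return err
}

// demo simulates an API call that fails twice with a transient 500
// before succeeding, and returns how many attempts were made.
func demo() (int, error) {
	calls := 0
	err := withRetry(3, time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return &statusError{Code: 500, Message: "etcdserver: request timed out"}
		}
		return nil
	})
	return calls, err
}

func main() {
	c, e := demo()
	fmt.Println(c, e)
}
```

In the actual run no such retry happened: the one-shot role-binding create and the hostexec pod launch each propagated the 500 straight into a test FAIL.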
[BeforeEach] [sig-storage] PersistentVolumes-local /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 17 14:15:55.930: INFO: >>> kubeConfig: /root/.kube/kind-test-config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link-bindmounted] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes May 17 14:16:22.833: FAIL: Unexpected error: <*errors.StatusError | 0xc000528a00>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "etcdserver: request timed out", Reason: "", Details: nil, Code: 500, }, } etcdserver: request timed out occurred Full Stack Trace k8s.io/kubernetes/test/e2e/storage/utils.(*hostExecutor).launchNodeExecPod(0xc003768100, 0xc003a03655, 0xb, 0xb) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/host_exec.go:110 +0x4b9 k8s.io/kubernetes/test/e2e/storage/utils.(*hostExecutor).exec(0xc003768100, 0xc003aced80, 0x16a, 0xc002a68c00, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) 
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/host_exec.go:136 +0x3b5 k8s.io/kubernetes/test/e2e/storage/utils.(*hostExecutor).IssueCommandWithResult(0xc003768100, 0xc003aced80, 0x16a, 0xc002a68c00, 0x5, 0x16a, 0xc003aced80, 0xc0038eed00) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/host_exec.go:169 +0x99 k8s.io/kubernetes/test/e2e/storage/utils.(*hostExecutor).IssueCommand(0xc003768100, 0xc003aced80, 0x16a, 0xc002a68c00, 0x5, 0xc003aced80) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/host_exec.go:178 +0x49 k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).setupLocalVolumeDirectoryLinkBindMounted(0xc004344cc0, 0xc002a68c00, 0x0, 0x7914d01) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:257 +0x216 k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).Create(0xc004344cc0, 0xc002a68c00, 0x70b01fd, 0x14, 0x0, 0x0) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:302 +0x4ed k8s.io/kubernetes/test/e2e/storage.setupLocalVolumes(0xc006506ea0, 0x70b01fd, 0x14, 0xc002a68c00, 0x1, 0x0, 0x0, 0xc0010c5e00) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:837 +0x157 k8s.io/kubernetes/test/e2e/storage.setupLocalVolumesPVCsPVs(0xc006506ea0, 0x70b01fd, 0x14, 0xc002a68c00, 0x1, 0x707c957, 0x9, 0x0, 0x0, 0x0) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1102 +0x87 k8s.io/kubernetes/test/e2e/storage.glob..func21.2.1() /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:200 +0xb6 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0002e7e00) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c 
k8s.io/kubernetes/test/e2e.TestE2E(0xc0002e7e00) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b testing.tRunner(0xc0002e7e00, 0x72e36d8) /usr/local/go/src/testing/testing.go:1203 +0xe5 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1248 +0x2b3 [AfterEach] [Volume type: dir-link-bindmounted] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "persistent-local-volumes-test-2943". STEP: Found 4 events. May 17 14:16:25.622: INFO: At 2022-05-17 14:15:56 +0000 UTC - event for hostexec-kind-worker-28fdh: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-2943/hostexec-kind-worker-28fdh to kind-worker May 17 14:16:25.622: INFO: At 2022-05-17 14:15:57 +0000 UTC - event for hostexec-kind-worker-28fdh: {kubelet kind-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine May 17 14:16:25.622: INFO: At 2022-05-17 14:15:58 +0000 UTC - event for hostexec-kind-worker-28fdh: {kubelet kind-worker} Created: Created container agnhost-container May 17 14:16:25.622: INFO: At 2022-05-17 14:15:58 +0000 UTC - event for hostexec-kind-worker-28fdh: {kubelet kind-worker} Started: Started container agnhost-container May 17 14:16:25.735: INFO: POD NODE PHASE GRACE CONDITIONS May 17 14:16:25.735: INFO: hostexec-kind-worker-28fdh kind-worker Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-17 14:15:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-17 14:15:56 +0000 UTC ContainersNotReady containers with unready status: [agnhost-container]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-17 14:15:56 +0000 UTC 
ContainersNotReady containers with unready status: [agnhost-container]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-17 14:15:56 +0000 UTC }] May 17 14:16:25.735: INFO: May 17 14:16:25.796: INFO: Logging node info for node kind-control-plane May 17 14:16:25.856: INFO: Node Info: &Node{ObjectMeta:{kind-control-plane 1fc1c875-88c2-4f79-b48f-f6ef32b124da 586 0 2022-05-17 14:12:51 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-05-17 14:12:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-05-17 14:12:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2022-05-17 14:13:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2022-05-17 14:13:20 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-17 14:13:20 +0000 UTC,LastTransitionTime:2022-05-17 14:12:47 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-17 14:13:20 +0000 UTC,LastTransitionTime:2022-05-17 14:12:47 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-17 14:13:20 +0000 UTC,LastTransitionTime:2022-05-17 14:12:47 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-17 14:13:20 +0000 
UTC,LastTransitionTime:2022-05-17 14:13:20 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.4,},NodeAddress{Type:Hostname,Address:kind-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:97599c4ddadc4d3eae566d304d7d104d,SystemUUID:e786229d-3adc-40a9-a01d-f7a176ae9288,BootID:74a98684-362f-4056-ae8e-c98add53e9d2,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.19+64ad393c62a2c5,KubeProxyVersion:v1.22.10-rc.0.19+64ad393c62a2c5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:5963165f44b226e3bcc74555b40074fc3b47b362fc2fce35ac184854a283aa9a k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:63e87a9aad6a6666e3f6f4b9aaee92307a9390cfc1e41a029ceca9f6c4439478 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:123265849,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:59073d1c8ff506ad93a224d17df309f7bbbc344acb535ac430fef15e3866f973 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:7f13d8eb6f1182786e7080a1f94ea5d4cf6873534c2d47e79c444c8216f823aa 
k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:53932856,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 17 14:16:25.857: INFO: Logging kubelet events for node kind-control-plane
May 17 14:16:25.983: INFO: Logging pods the kubelet thinks is on node kind-control-plane
May 17 14:16:26.056: INFO: kube-apiserver-kind-control-plane started at 2022-05-17 14:13:01 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:26.056: INFO: Container kube-apiserver ready: true, restart count 0
May 17 14:16:26.056: INFO: kube-controller-manager-kind-control-plane started at 2022-05-17 14:13:01 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:26.056: INFO: Container kube-controller-manager ready: false, restart count 0
May 17 14:16:26.056: INFO: kube-scheduler-kind-control-plane started at 2022-05-17 14:13:00 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:26.056: INFO: Container kube-scheduler ready: false, restart count 0
May 17 14:16:26.056: INFO: kindnet-s6wb7 started at 2022-05-17 14:13:07 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:26.056: INFO: Container kindnet-cni ready: true, restart count 0
May 17 14:16:26.056: INFO: etcd-kind-control-plane started at 2022-05-17 14:13:00 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:26.056: INFO: Container etcd ready: true, restart count 0
May 17 14:16:26.056: INFO: local-path-provisioner-6c9449b9dd-bllk2 started at 2022-05-17 14:13:20 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:26.056: INFO: Container local-path-provisioner ready: true, restart count 0
May 17 14:16:26.056: INFO: coredns-78fcd69978-r97ls started at 2022-05-17 14:13:20 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:26.056: INFO: Container coredns ready: true, restart count 0
May 17 14:16:26.056: INFO: kube-proxy-5tzjs started at 2022-05-17 14:13:26 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:26.056: INFO: Container kube-proxy ready: true, restart count 0
May 17 14:16:26.056: INFO: coredns-78fcd69978-b6zn2 started at 2022-05-17 14:13:20 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:26.056: INFO: Container coredns ready: true, restart count 0
May 17 14:16:26.571: INFO: Latency metrics for node kind-control-plane
May 17 14:16:26.571: INFO: Logging node info for node kind-worker
May 17 14:16:26.657: INFO: Node Info: &Node{ObjectMeta:{kind-worker 5ba69ac9-ccd5-447d-9f27-9bee10b23121 6999 0 2022-05-17 14:13:16 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker kubernetes.io/os:linux topology.hostpath.csi/node:kind-worker] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-9914":"kind-worker","csi-mock-csi-mock-volumes-7782":"csi-mock-csi-mock-volumes-7782"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubeadm Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-05-17 14:15:50 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2022-05-17 14:15:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:56 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 
+0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:56 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:56 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-17 14:15:56 +0000 UTC,LastTransitionTime:2022-05-17 14:13:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.2,},NodeAddress{Type:Hostname,Address:kind-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d7e8057f975041deb201a063c162c158,SystemUUID:fc207ccb-2e60-4380-b38c-671b606f274a,BootID:74a98684-362f-4056-ae8e-c98add53e9d2,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.19+64ad393c62a2c5,KubeProxyVersion:v1.22.10-rc.0.19+64ad393c62a2c5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:5963165f44b226e3bcc74555b40074fc3b47b362fc2fce35ac184854a283aa9a k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:63e87a9aad6a6666e3f6f4b9aaee92307a9390cfc1e41a029ceca9f6c4439478 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:123265849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 
k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:59073d1c8ff506ad93a224d17df309f7bbbc344acb535ac430fef15e3866f973 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:86742272,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:7f13d8eb6f1182786e7080a1f94ea5d4cf6873534c2d47e79c444c8216f823aa k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:53932856,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 
k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b 
k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-ephemeral-9914^da35d3ba-d5eb-11ec-918a-225d08066155],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-ephemeral-9914^da35d3ba-d5eb-11ec-918a-225d08066155,DevicePath:,},},Config:nil,},}
May 17 14:16:26.658: INFO: Logging kubelet events for node kind-worker
May 17 14:16:26.769: INFO: Logging pods the kubelet thinks is on node kind-worker
May 17 14:16:26.852: INFO: busybox-readonly-fs83fb0cb9-60d4-419e-8e63-6beee25ccd06 started at 2022-05-17 14:15:24 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:26.852: INFO: Container busybox-readonly-fs83fb0cb9-60d4-419e-8e63-6beee25ccd06 ready: true, restart count 0
May 17 14:16:26.852: INFO: pod-b8564c48-8aa2-4bdb-a5eb-256f8afd47cc started at <nil> (0+0 container statuses recorded)
May 17 14:16:26.852: INFO: concurrent-27546615-rw8md started at 2022-05-17 14:15:00 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:26.852: INFO: Container c ready: true, restart count 0
May 17 14:16:26.852: INFO: hostexec-kind-worker-kbzwd started at <nil> (0+0 container statuses recorded)
May 17 14:16:26.853: INFO: service-headless-tgqlh started at 2022-05-17 14:13:53 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:26.853: INFO: Container service-headless ready: true, restart count 0
May 17 14:16:26.853: INFO: deployment-55649fd747-f5bss started at 2022-05-17 14:14:10 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:26.853: INFO: Container nginx ready: false, restart count 0
May 17 14:16:26.853: INFO: pod-terminate-status-2-1 started at <nil> (0+0 container statuses recorded)
May 17 14:16:26.853: INFO: service-proxy-disabled-qk4md started at 2022-05-17 14:15:32 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:26.853: INFO: Container service-proxy-disabled ready: true, restart count 0
May 17 14:16:26.853: INFO: pod-qos-class-5c206c25-c993-4e12-9d89-06931c05bb47 started at <nil> (0+0 container statuses recorded)
May 17 14:16:26.853: INFO: service-headless-toggled-nw4rn started at 2022-05-17 14:14:08 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:26.853: INFO: Container service-headless-toggled ready: true, restart count 0
May 17 14:16:26.853: INFO: inline-volume-tester-h6pnw started at <nil> (0+0 container statuses recorded)
May 17 14:16:26.853: INFO: frontend-685fc574d5-64jqm started at 2022-05-17 14:15:55 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:26.853: INFO: Container guestbook-frontend ready: true, restart count 0
May 17 14:16:26.853: INFO: service-headless-q5g89 started at 2022-05-17 14:13:53 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:26.853: INFO: Container service-headless ready: true, restart count 0
May 17 14:16:26.853: INFO: service-headless-toggled-9zh79 started at 2022-05-17 14:14:08 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:26.853: INFO: Container service-headless-toggled ready: true, restart count 0
May 17 14:16:26.853: INFO: kube-proxy-dr9f5 started at 2022-05-17 14:13:23 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:26.853: INFO: Container kube-proxy ready: true, restart count 0
May 17 14:16:26.853: INFO: pod-terminate-status-0-1 started at <nil> (0+0 container statuses recorded)
May 17 14:16:26.853: INFO: concurrent-27546616-5t86w started at <nil> (0+0 container statuses recorded)
May 17 14:16:26.853: INFO: hostexec-kind-worker-gzkkg started at 2022-05-17 14:15:51 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:26.853: INFO: Container agnhost-container ready: true, restart count 0
May 17 14:16:26.853: INFO: hostexec-kind-worker-28fdh started at 2022-05-17 14:15:56 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:26.853: INFO: Container agnhost-container ready: false, restart count 0
May 17 14:16:26.853: INFO: pod-client started at 2022-05-17 14:15:09 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:26.853: INFO: Container pod-client ready: true, restart count 0
May 17 14:16:26.853: INFO: liveness-7a123bfb-f826-4ce3-ad61-ebc03b5bc931 started at 2022-05-17 14:14:13 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:26.853: INFO: Container agnhost-container ready: true, restart count 0
May 17 14:16:26.853: INFO: kindnet-56p79 started at 2022-05-17 14:13:17 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:26.853: INFO: Container kindnet-cni ready: true, restart count 0
May 17 14:16:26.853: INFO: csi-mockplugin-0 started at <nil> (0+0 container statuses recorded)
May 17 14:16:26.853: INFO: implicit-nonroot-uid started at <nil> (0+0 container statuses recorded)
May 17 14:16:26.853: INFO: ss-1 started at <nil> (0+0 container statuses recorded)
May 17 14:16:26.853: INFO: csi-hostpathplugin-0 started at 2022-05-17 14:15:44 +0000 UTC (0+7 container statuses recorded)
May 17 14:16:26.853: INFO: Container csi-attacher ready: true, restart count 0
May 17 14:16:26.853: INFO: Container csi-provisioner ready: true, restart count 0
May 17 14:16:26.853: INFO: Container csi-resizer ready: true, restart count 0
May 17 14:16:26.853: INFO: Container csi-snapshotter ready: true, restart count 0
May 17 14:16:26.853: INFO: Container hostpath ready: true, restart count 0
May 17 14:16:26.853: INFO: Container liveness-probe ready: true, restart count 0
May 17 14:16:26.853: INFO: Container node-driver-registrar ready: true, restart count 0
May 17 14:16:26.853: INFO: affinity-nodeport-xj24q started at 2022-05-17 14:15:27 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:26.853: INFO: Container affinity-nodeport ready: true, restart count 0
May 17 14:16:26.853: INFO: deployment-55649fd747-6dbnz started at 2022-05-17 14:14:10 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:26.853: INFO: Container nginx ready: false, restart count 0
May 17 14:16:26.853: INFO: service-proxy-toggled-mlq7q started at 2022-05-17 14:15:47 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:26.853: INFO: Container service-proxy-toggled ready: true, restart count 0
May 17 14:16:26.853: INFO: hostexec-kind-worker-6t6pq started at 2022-05-17 14:15:52 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:26.853: INFO: Container agnhost-container ready: true, restart count 0
May 17 14:16:26.853: INFO: agnhost-primary-5db8ddd565-ktltd started at 2022-05-17 14:15:55 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:26.853: INFO: Container primary ready: true, restart count 0
May 17 14:16:26.853: INFO: send-events-4adc3414-703f-4b62-a42c-926d4b4b6fc2 started at <nil> (0+0 container statuses recorded)
May 17 14:16:26.853: INFO: termination-message-container5de5961b-b8bb-4868-adb0-b23802f044ae started at <nil> (0+0 container statuses recorded)
May 17 14:16:26.853: INFO: agnhost-replica-6bcf79b489-885sh started at 2022-05-17 14:15:56 +0000 UTC (0+1 container statuses recorded)
May 17 14:16:26.853: INFO: Container replica ready: true, restart count 0
May 17 14:16:28.733: INFO: Latency metrics for node kind-worker
May 17 14:16:28.733: INFO: Logging node info for node kind-worker2
May 17 14:16:28.742: INFO: Node Info: &Node{ObjectMeta:{kind-worker2 d009e97f-d73b-4ed0-8afd-9f776bcb704d 6998 0 2022-05-17 14:13:16 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker2 kubernetes.io/os:linux topology.hostpath.csi/node:kind-worker2]
map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-2591":"kind-worker2","csi-mock-csi-mock-volumes-8289":"csi-mock-csi-mock-volumes-8289"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubeadm Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubelet Update v1 2022-05-17 14:15:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 
DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:47 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:47 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:47 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-17 14:15:47 +0000 UTC,LastTransitionTime:2022-05-17 14:13:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.3,},NodeAddress{Type:Hostname,Address:kind-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:521433a178fd4494a7881394e8d644cf,SystemUUID:c86f292f-55cb-49c2-9dda-efb91a06dab9,BootID:74a98684-362f-4056-ae8e-c98add53e9d2,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 
21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.19+64ad393c62a2c5,KubeProxyVersion:v1.22.10-rc.0.19+64ad393c62a2c5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:5963165f44b226e3bcc74555b40074fc3b47b362fc2fce35ac184854a283aa9a k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:63e87a9aad6a6666e3f6f4b9aaee92307a9390cfc1e41a029ceca9f6c4439478 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:123265849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:59073d1c8ff506ad93a224d17df309f7bbbc344acb535ac430fef15e3866f973 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:7f13d8eb6f1182786e7080a1f94ea5d4cf6873534c2d47e79c444c8216f823aa k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:53932856,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 
k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 
k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 17 14:16:28.743: INFO: Logging kubelet events for node kind-worker2 May 17 14:16:28.766: INFO: Logging pods the kubelet thinks is on node kind-worker2 May 17 14:16:29.051: INFO: kindnet-mq8qb started at 2022-05-17 14:13:17 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.051: INFO: Container kindnet-cni ready: true, restart count 0 May 17 14:16:29.051: INFO: service-headless-9mh45 started at 2022-05-17 14:13:53 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.051: INFO: Container service-headless ready: true, restart count 0 May 17 14:16:29.051: INFO: pod-a2ba308d-816a-406c-bf57-d5840e5d5387 started at 2022-05-17 14:15:58 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.051: INFO: Container write-pod ready: true, restart count 0 May 17 14:16:29.051: INFO: affinity-nodeport-f2r7w started at 2022-05-17 14:15:27 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.051: INFO: Container affinity-nodeport ready: true, restart 
count 0 May 17 14:16:29.051: INFO: hostexec-kind-worker2-kwzz9 started at 2022-05-17 14:15:35 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.051: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:29.051: INFO: hostexec-kind-worker2-gh6gn started at 2022-05-17 14:15:19 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.051: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:29.051: INFO: pod-2d828178-7e03-4331-91b0-2165aa4eeee5 started at <nil> (0+0 container statuses recorded) May 17 14:16:29.052: INFO: csi-mockplugin-0 started at 2022-05-17 14:15:19 +0000 UTC (0+4 container statuses recorded) May 17 14:16:29.052: INFO: Container busybox ready: true, restart count 0 May 17 14:16:29.052: INFO: Container csi-provisioner ready: true, restart count 0 May 17 14:16:29.052: INFO: Container driver-registrar ready: true, restart count 0 May 17 14:16:29.052: INFO: Container mock ready: true, restart count 0 May 17 14:16:29.052: INFO: hostexec-kind-worker2-k8cx7 started at 2022-05-17 14:15:45 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.052: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:29.052: INFO: pod-server-2 started at 2022-05-17 14:15:45 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.052: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:29.052: INFO: csi-hostpathplugin-0 started at 2022-05-17 14:13:56 +0000 UTC (0+7 container statuses recorded) May 17 14:16:29.052: INFO: Container csi-attacher ready: true, restart count 0 May 17 14:16:29.052: INFO: Container csi-provisioner ready: true, restart count 0 May 17 14:16:29.052: INFO: Container csi-resizer ready: true, restart count 0 May 17 14:16:29.052: INFO: Container csi-snapshotter ready: true, restart count 0 May 17 14:16:29.052: INFO: Container hostpath ready: true, restart count 0 May 17 14:16:29.052: INFO: Container liveness-probe ready: true, restart count 0 
May 17 14:16:29.052: INFO: Container node-driver-registrar ready: true, restart count 0 May 17 14:16:29.052: INFO: ss-0 started at 2022-05-17 14:15:25 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.052: INFO: Container webserver ready: true, restart count 0 May 17 14:16:29.052: INFO: pod-23740531-141e-4062-9ddb-69c15ab1b060 started at 2022-05-17 14:15:53 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.052: INFO: Container write-pod ready: true, restart count 0 May 17 14:16:29.052: INFO: pvc-volume-tester-wh448 started at 2022-05-17 14:15:47 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.052: INFO: Container volume-tester ready: true, restart count 0 May 17 14:16:29.052: INFO: service-proxy-toggled-wrl7r started at 2022-05-17 14:15:47 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.052: INFO: Container service-proxy-toggled ready: true, restart count 0 May 17 14:16:29.052: INFO: ss2-0 started at 2022-05-17 14:16:01 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.052: INFO: Container webserver ready: false, restart count 0 May 17 14:16:29.052: INFO: rs-vkdrm started at 2022-05-17 14:14:49 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.052: INFO: Container donothing ready: false, restart count 0 May 17 14:16:29.052: INFO: service-proxy-disabled-brdhb started at 2022-05-17 14:15:32 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.052: INFO: Container service-proxy-disabled ready: true, restart count 0 May 17 14:16:29.052: INFO: pod-e8401689-4cef-4e20-9f2e-264844f9d704 started at 2022-05-17 14:15:53 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.052: INFO: Container write-pod ready: false, restart count 0 May 17 14:16:29.052: INFO: service-proxy-disabled-xzkc8 started at 2022-05-17 14:15:32 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.052: INFO: Container service-proxy-disabled ready: true, restart count 0 May 17 14:16:29.052: INFO: frontend-685fc574d5-lc6wl 
started at 2022-05-17 14:15:55 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.052: INFO: Container guestbook-frontend ready: true, restart count 0 May 17 14:16:29.052: INFO: frontend-685fc574d5-mj2wn started at 2022-05-17 14:15:55 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.052: INFO: Container guestbook-frontend ready: true, restart count 0 May 17 14:16:29.052: INFO: csi-mockplugin-attacher-0 started at 2022-05-17 14:13:58 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.052: INFO: Container csi-attacher ready: true, restart count 0 May 17 14:16:29.052: INFO: kube-proxy-sdmmb started at 2022-05-17 14:13:29 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.052: INFO: Container kube-proxy ready: true, restart count 0 May 17 14:16:29.052: INFO: agnhost-replica-6bcf79b489-xd4h9 started at 2022-05-17 14:15:56 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.052: INFO: Container replica ready: true, restart count 0 May 17 14:16:29.052: INFO: pod-9ec2f9c5-e367-4a4f-8edf-f159344127d8 started at 2022-05-17 14:16:03 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.052: INFO: Container write-pod ready: true, restart count 0 May 17 14:16:29.052: INFO: inline-volume-tester-tx6st started at 2022-05-17 14:13:56 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.052: INFO: Container csi-volume-tester ready: true, restart count 0 May 17 14:16:29.052: INFO: csi-mockplugin-0 started at 2022-05-17 14:13:58 +0000 UTC (0+3 container statuses recorded) May 17 14:16:29.052: INFO: Container csi-provisioner ready: true, restart count 0 May 17 14:16:29.052: INFO: Container driver-registrar ready: true, restart count 0 May 17 14:16:29.052: INFO: Container mock ready: true, restart count 0 May 17 14:16:29.052: INFO: service-headless-toggled-dvx6b started at 2022-05-17 14:14:08 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.052: INFO: Container service-headless-toggled ready: true, restart count 0 May 
17 14:16:29.052: INFO: service-proxy-toggled-djh27 started at 2022-05-17 14:15:47 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.052: INFO: Container service-proxy-toggled ready: true, restart count 0 May 17 14:16:30.924: INFO: Latency metrics for node kind-worker2 May 17 14:16:30.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-2943" for this suite.
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sPersistentVolumes\-local\s\s\[Volume\stype\:\sdir\-link\]\sTwo\spods\smounting\sa\slocal\svolume\sat\sthe\ssame\stime\sshould\sbe\sable\sto\swrite\sfrom\spod1\sand\sread\sfrom\spod2$'
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
May 17 14:16:22.842: Unexpected error:
    <*errors.errorString | 0xc00409f170>: {
        s: "pod \"pod-2d828178-7e03-4331-91b0-2165aa4eeee5\" is not Running: etcdserver: request timed out",
    }
    pod "pod-2d828178-7e03-4331-91b0-2165aa4eeee5" is not Running: etcdserver: request timed out
occurred
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:749
from junit_15.xml
[BeforeEach] [sig-storage] PersistentVolumes-local /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 17 14:15:18.861: INFO: >>> kubeConfig: /root/.kube/kind-test-config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes May 17 14:15:37.166: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-cf16f195-52be-42b5-b0bf-f85254123a5f-backend && ln -s /tmp/local-volume-test-cf16f195-52be-42b5-b0bf-f85254123a5f-backend /tmp/local-volume-test-cf16f195-52be-42b5-b0bf-f85254123a5f] Namespace:persistent-local-volumes-test-800 PodName:hostexec-kind-worker2-gh6gn ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 17 14:15:37.166: INFO: >>> kubeConfig: /root/.kube/kind-test-config STEP: Creating local PVCs and PVs May 17 14:15:37.312: INFO: Creating a PV followed by a PVC May 17 14:15:37.333: INFO: Waiting for PV local-pvqvxfd to bind to PVC pvc-4vgtn May 17 14:15:37.333: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-4vgtn] to have phase Bound May 17 14:15:37.363: INFO: PersistentVolumeClaim pvc-4vgtn found but phase is Pending instead of Bound. May 17 14:15:39.379: INFO: PersistentVolumeClaim pvc-4vgtn found but phase is Pending instead of Bound. 
May 17 14:15:41.398: INFO: PersistentVolumeClaim pvc-4vgtn found but phase is Pending instead of Bound. May 17 14:15:43.454: INFO: PersistentVolumeClaim pvc-4vgtn found but phase is Pending instead of Bound. May 17 14:15:45.464: INFO: PersistentVolumeClaim pvc-4vgtn found but phase is Pending instead of Bound. May 17 14:15:47.469: INFO: PersistentVolumeClaim pvc-4vgtn found but phase is Pending instead of Bound. May 17 14:15:49.473: INFO: PersistentVolumeClaim pvc-4vgtn found but phase is Pending instead of Bound. May 17 14:15:51.520: INFO: PersistentVolumeClaim pvc-4vgtn found but phase is Pending instead of Bound. May 17 14:15:53.533: INFO: PersistentVolumeClaim pvc-4vgtn found and phase=Bound (16.19978227s) May 17 14:15:53.533: INFO: Waiting up to 3m0s for PersistentVolume local-pvqvxfd to have phase Bound May 17 14:15:53.537: INFO: PersistentVolume local-pvqvxfd found and phase=Bound (3.503792ms) [It] should be able to write from pod1 and read from pod2 /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod May 17 14:16:05.667: INFO: pod "pod-23740531-141e-4062-9ddb-69c15ab1b060" created on Node "kind-worker2" STEP: Writing in pod1 May 17 14:16:05.667: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-800 PodName:pod-23740531-141e-4062-9ddb-69c15ab1b060 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 17 14:16:05.667: INFO: >>> kubeConfig: /root/.kube/kind-test-config May 17 14:16:05.859: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: <nil> May 17 14:16:05.859: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] 
Namespace:persistent-local-volumes-test-800 PodName:pod-23740531-141e-4062-9ddb-69c15ab1b060 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 17 14:16:05.859: INFO: >>> kubeConfig: /root/.kube/kind-test-config May 17 14:16:06.097: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: <nil> STEP: Creating pod2 to read from the PV STEP: Creating a pod May 17 14:16:22.842: FAIL: Unexpected error: <*errors.errorString | 0xc00409f170>: { s: "pod \"pod-2d828178-7e03-4331-91b0-2165aa4eeee5\" is not Running: etcdserver: request timed out", } pod "pod-2d828178-7e03-4331-91b0-2165aa4eeee5" is not Running: etcdserver: request timed out occurred Full Stack Trace k8s.io/kubernetes/test/e2e/storage.twoPodsReadWriteTest(0xc0035949a0, 0xc002f5eea0, 0xc0035ae7b0) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:749 +0x2d6 k8s.io/kubernetes/test/e2e/storage.glob..func21.2.4.1() /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:250 +0x45 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000a0a480) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc000a0a480) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b testing.tRunner(0xc000a0a480, 0x72e36d8) /usr/local/go/src/testing/testing.go:1203 +0xe5 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1248 +0x2b3 [AfterEach] [Volume type: dir-link] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV May 17 14:16:22.843: INFO: Deleting PersistentVolumeClaim "pvc-4vgtn" May 17 14:16:25.561: INFO: Deleting PersistentVolume "local-pvqvxfd" STEP: Removing the test directory May 17 
14:16:25.693: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-cf16f195-52be-42b5-b0bf-f85254123a5f && rm -r /tmp/local-volume-test-cf16f195-52be-42b5-b0bf-f85254123a5f-backend] Namespace:persistent-local-volumes-test-800 PodName:hostexec-kind-worker2-gh6gn ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 17 14:16:25.693: INFO: >>> kubeConfig: /root/.kube/kind-test-config [AfterEach] [sig-storage] PersistentVolumes-local /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "persistent-local-volumes-test-800". STEP: Found 11 events. May 17 14:16:26.211: INFO: At 2022-05-17 14:15:19 +0000 UTC - event for hostexec-kind-worker2-gh6gn: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-800/hostexec-kind-worker2-gh6gn to kind-worker2 May 17 14:16:26.211: INFO: At 2022-05-17 14:15:21 +0000 UTC - event for hostexec-kind-worker2-gh6gn: {kubelet kind-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine May 17 14:16:26.211: INFO: At 2022-05-17 14:15:21 +0000 UTC - event for hostexec-kind-worker2-gh6gn: {kubelet kind-worker2} Created: Created container agnhost-container May 17 14:16:26.211: INFO: At 2022-05-17 14:15:21 +0000 UTC - event for hostexec-kind-worker2-gh6gn: {kubelet kind-worker2} Started: Started container agnhost-container May 17 14:16:26.211: INFO: At 2022-05-17 14:15:37 +0000 UTC - event for pvc-4vgtn: {persistentvolume-controller } ProvisioningFailed: no volume plugin matched name: kubernetes.io/no-provisioner May 17 14:16:26.211: INFO: At 2022-05-17 14:15:53 +0000 UTC - event for pod-23740531-141e-4062-9ddb-69c15ab1b060: {default-scheduler } Scheduled: Successfully assigned 
persistent-local-volumes-test-800/pod-23740531-141e-4062-9ddb-69c15ab1b060 to kind-worker2 May 17 14:16:26.211: INFO: At 2022-05-17 14:15:56 +0000 UTC - event for pod-23740531-141e-4062-9ddb-69c15ab1b060: {kubelet kind-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-1" already present on machine May 17 14:16:26.211: INFO: At 2022-05-17 14:15:57 +0000 UTC - event for pod-23740531-141e-4062-9ddb-69c15ab1b060: {kubelet kind-worker2} Created: Created container write-pod May 17 14:16:26.211: INFO: At 2022-05-17 14:15:57 +0000 UTC - event for pod-23740531-141e-4062-9ddb-69c15ab1b060: {kubelet kind-worker2} Started: Started container write-pod May 17 14:16:26.211: INFO: At 2022-05-17 14:16:06 +0000 UTC - event for pod-2d828178-7e03-4331-91b0-2165aa4eeee5: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-800/pod-2d828178-7e03-4331-91b0-2165aa4eeee5 to kind-worker2 May 17 14:16:26.211: INFO: At 2022-05-17 14:16:08 +0000 UTC - event for pod-2d828178-7e03-4331-91b0-2165aa4eeee5: {kubelet kind-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-1" already present on machine May 17 14:16:26.263: INFO: POD NODE PHASE GRACE CONDITIONS May 17 14:16:26.263: INFO: hostexec-kind-worker2-gh6gn kind-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-17 14:15:19 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-17 14:15:22 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-17 14:15:22 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-17 14:15:19 +0000 UTC }] May 17 14:16:26.263: INFO: pod-23740531-141e-4062-9ddb-69c15ab1b060 kind-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-17 14:15:53 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-17 14:15:57 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-17 14:15:57 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 
2022-05-17 14:15:53 +0000 UTC }] May 17 14:16:26.263: INFO: pod-2d828178-7e03-4331-91b0-2165aa4eeee5 kind-worker2 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-17 14:16:06 +0000 UTC }] May 17 14:16:26.263: INFO: May 17 14:16:26.299: INFO: Logging node info for node kind-control-plane May 17 14:16:26.374: INFO: Node Info: &Node{ObjectMeta:{kind-control-plane 1fc1c875-88c2-4f79-b48f-f6ef32b124da 586 0 2022-05-17 14:12:51 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-05-17 14:12:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-05-17 14:12:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2022-05-17 14:13:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2022-05-17 14:13:20 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-17 14:13:20 +0000 UTC,LastTransitionTime:2022-05-17 14:12:47 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-17 14:13:20 +0000 UTC,LastTransitionTime:2022-05-17 14:12:47 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-17 14:13:20 +0000 UTC,LastTransitionTime:2022-05-17 14:12:47 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-17 14:13:20 +0000 
UTC,LastTransitionTime:2022-05-17 14:13:20 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.4,},NodeAddress{Type:Hostname,Address:kind-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:97599c4ddadc4d3eae566d304d7d104d,SystemUUID:e786229d-3adc-40a9-a01d-f7a176ae9288,BootID:74a98684-362f-4056-ae8e-c98add53e9d2,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.19+64ad393c62a2c5,KubeProxyVersion:v1.22.10-rc.0.19+64ad393c62a2c5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:5963165f44b226e3bcc74555b40074fc3b47b362fc2fce35ac184854a283aa9a k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:63e87a9aad6a6666e3f6f4b9aaee92307a9390cfc1e41a029ceca9f6c4439478 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:123265849,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:59073d1c8ff506ad93a224d17df309f7bbbc344acb535ac430fef15e3866f973 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:7f13d8eb6f1182786e7080a1f94ea5d4cf6873534c2d47e79c444c8216f823aa 
k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:53932856,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 17 14:16:26.374: INFO: Logging kubelet events for node kind-control-plane May 17 14:16:26.478: INFO: Logging pods the kubelet thinks is on node kind-control-plane May 17 14:16:26.567: INFO: kube-scheduler-kind-control-plane started at 2022-05-17 14:13:00 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.567: INFO: Container kube-scheduler ready: false, restart count 0 May 17 14:16:26.567: INFO: kindnet-s6wb7 started at 2022-05-17 14:13:07 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.567: INFO: Container kindnet-cni ready: true, restart count 0 May 17 14:16:26.567: INFO: etcd-kind-control-plane started at 2022-05-17 14:13:00 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.567: INFO: Container etcd ready: true, restart count 0 May 17 14:16:26.567: INFO: kube-apiserver-kind-control-plane started at 2022-05-17 14:13:01 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.567: INFO: Container kube-apiserver ready: true, restart count 0 May 17 14:16:26.567: INFO: kube-controller-manager-kind-control-plane started at 2022-05-17 14:13:01 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.567: INFO: Container kube-controller-manager ready: false, restart count 0 May 17 14:16:26.567: INFO: kube-proxy-5tzjs started at 2022-05-17 14:13:26 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.567: INFO: Container kube-proxy ready: true, restart count 
0 May 17 14:16:26.567: INFO: coredns-78fcd69978-b6zn2 started at 2022-05-17 14:13:20 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.567: INFO: Container coredns ready: true, restart count 0 May 17 14:16:26.567: INFO: local-path-provisioner-6c9449b9dd-bllk2 started at 2022-05-17 14:13:20 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.567: INFO: Container local-path-provisioner ready: true, restart count 0 May 17 14:16:26.567: INFO: coredns-78fcd69978-r97ls started at 2022-05-17 14:13:20 +0000 UTC (0+1 container statuses recorded) May 17 14:16:26.567: INFO: Container coredns ready: true, restart count 0 May 17 14:16:27.174: INFO: Latency metrics for node kind-control-plane May 17 14:16:27.174: INFO: Logging node info for node kind-worker May 17 14:16:27.241: INFO: Node Info: &Node{ObjectMeta:{kind-worker 5ba69ac9-ccd5-447d-9f27-9bee10b23121 6999 0 2022-05-17 14:13:16 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker kubernetes.io/os:linux topology.hostpath.csi/node:kind-worker] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-9914":"kind-worker","csi-mock-csi-mock-volumes-7782":"csi-mock-csi-mock-volumes-7782"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubeadm Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-05-17 14:15:50 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2022-05-17 14:15:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:56 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 
+0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:56 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:56 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-17 14:15:56 +0000 UTC,LastTransitionTime:2022-05-17 14:13:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.2,},NodeAddress{Type:Hostname,Address:kind-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d7e8057f975041deb201a063c162c158,SystemUUID:fc207ccb-2e60-4380-b38c-671b606f274a,BootID:74a98684-362f-4056-ae8e-c98add53e9d2,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.19+64ad393c62a2c5,KubeProxyVersion:v1.22.10-rc.0.19+64ad393c62a2c5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:5963165f44b226e3bcc74555b40074fc3b47b362fc2fce35ac184854a283aa9a k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:63e87a9aad6a6666e3f6f4b9aaee92307a9390cfc1e41a029ceca9f6c4439478 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:123265849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 
k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:59073d1c8ff506ad93a224d17df309f7bbbc344acb535ac430fef15e3866f973 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:86742272,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:7f13d8eb6f1182786e7080a1f94ea5d4cf6873534c2d47e79c444c8216f823aa k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:53932856,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 
k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b 
k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-ephemeral-9914^da35d3ba-d5eb-11ec-918a-225d08066155],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-ephemeral-9914^da35d3ba-d5eb-11ec-918a-225d08066155,DevicePath:,},},Config:nil,},} May 17 14:16:27.242: INFO: Logging kubelet events for node kind-worker May 17 14:16:27.294: INFO: Logging pods the kubelet thinks is on node kind-worker May 17 14:16:27.375: INFO: service-proxy-disabled-qk4md started at 2022-05-17 14:15:32 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.376: INFO: Container service-proxy-disabled ready: true, restart count 0 May 17 14:16:27.376: INFO: busybox-readonly-fs83fb0cb9-60d4-419e-8e63-6beee25ccd06 started at 2022-05-17 14:15:24 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.376: INFO: Container busybox-readonly-fs83fb0cb9-60d4-419e-8e63-6beee25ccd06 ready: true, restart count 0 May 17 14:16:27.376: INFO: pod-b8564c48-8aa2-4bdb-a5eb-256f8afd47cc started at <nil> (0+0 container statuses recorded) May 17 14:16:27.376: INFO: concurrent-27546615-rw8md started at 2022-05-17 14:15:00 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.376: INFO: Container c ready: true, restart count 0 May 17 14:16:27.376: INFO: hostexec-kind-worker-kbzwd started at <nil> (0+0 container statuses recorded) May 17 14:16:27.376: INFO: service-headless-tgqlh started at 2022-05-17 14:13:53 +0000 UTC 
(0+1 container statuses recorded) May 17 14:16:27.376: INFO: Container service-headless ready: true, restart count 0 May 17 14:16:27.376: INFO: deployment-55649fd747-f5bss started at 2022-05-17 14:14:10 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.376: INFO: Container nginx ready: false, restart count 0 May 17 14:16:27.376: INFO: pod-terminate-status-2-1 started at <nil> (0+0 container statuses recorded) May 17 14:16:27.376: INFO: pod-qos-class-5c206c25-c993-4e12-9d89-06931c05bb47 started at 2022-05-17 14:16:07 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.376: INFO: Container agnhost ready: false, restart count 0 May 17 14:16:27.376: INFO: service-headless-toggled-9zh79 started at 2022-05-17 14:14:08 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.376: INFO: Container service-headless-toggled ready: true, restart count 0 May 17 14:16:27.376: INFO: service-headless-toggled-nw4rn started at 2022-05-17 14:14:08 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.376: INFO: Container service-headless-toggled ready: true, restart count 0 May 17 14:16:27.376: INFO: inline-volume-tester-h6pnw started at <nil> (0+0 container statuses recorded) May 17 14:16:27.376: INFO: frontend-685fc574d5-64jqm started at 2022-05-17 14:15:55 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.376: INFO: Container guestbook-frontend ready: true, restart count 0 May 17 14:16:27.376: INFO: service-headless-q5g89 started at 2022-05-17 14:13:53 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.376: INFO: Container service-headless ready: true, restart count 0 May 17 14:16:27.376: INFO: kube-proxy-dr9f5 started at 2022-05-17 14:13:23 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.376: INFO: Container kube-proxy ready: true, restart count 0 May 17 14:16:27.376: INFO: pod-terminate-status-0-1 started at <nil> (0+0 container statuses recorded) May 17 14:16:27.376: INFO: liveness-7a123bfb-f826-4ce3-ad61-ebc03b5bc931 
started at 2022-05-17 14:14:13 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.376: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:27.376: INFO: concurrent-27546616-5t86w started at <nil> (0+0 container statuses recorded) May 17 14:16:27.376: INFO: hostexec-kind-worker-gzkkg started at 2022-05-17 14:15:51 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.376: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:27.376: INFO: hostexec-kind-worker-28fdh started at 2022-05-17 14:15:56 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.376: INFO: Container agnhost-container ready: false, restart count 0 May 17 14:16:27.376: INFO: pod-client started at 2022-05-17 14:15:09 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.376: INFO: Container pod-client ready: true, restart count 0 May 17 14:16:27.376: INFO: ss-1 started at <nil> (0+0 container statuses recorded) May 17 14:16:27.376: INFO: kindnet-56p79 started at 2022-05-17 14:13:17 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.376: INFO: Container kindnet-cni ready: true, restart count 0 May 17 14:16:27.376: INFO: csi-mockplugin-0 started at <nil> (0+0 container statuses recorded) May 17 14:16:27.376: INFO: implicit-nonroot-uid started at <nil> (0+0 container statuses recorded) May 17 14:16:27.376: INFO: csi-hostpathplugin-0 started at 2022-05-17 14:15:44 +0000 UTC (0+7 container statuses recorded) May 17 14:16:27.376: INFO: Container csi-attacher ready: true, restart count 0 May 17 14:16:27.376: INFO: Container csi-provisioner ready: true, restart count 0 May 17 14:16:27.376: INFO: Container csi-resizer ready: true, restart count 0 May 17 14:16:27.376: INFO: Container csi-snapshotter ready: true, restart count 0 May 17 14:16:27.376: INFO: Container hostpath ready: true, restart count 0 May 17 14:16:27.376: INFO: Container liveness-probe ready: true, restart count 0 May 17 14:16:27.376: INFO: Container 
node-driver-registrar ready: true, restart count 0 May 17 14:16:27.376: INFO: affinity-nodeport-xj24q started at 2022-05-17 14:15:27 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.376: INFO: Container affinity-nodeport ready: true, restart count 0 May 17 14:16:27.376: INFO: deployment-55649fd747-6dbnz started at 2022-05-17 14:14:10 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.376: INFO: Container nginx ready: false, restart count 0 May 17 14:16:27.376: INFO: termination-message-container5de5961b-b8bb-4868-adb0-b23802f044ae started at <nil> (0+0 container statuses recorded) May 17 14:16:27.376: INFO: service-proxy-toggled-mlq7q started at 2022-05-17 14:15:47 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.376: INFO: Container service-proxy-toggled ready: true, restart count 0 May 17 14:16:27.376: INFO: hostexec-kind-worker-6t6pq started at 2022-05-17 14:15:52 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.376: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:27.376: INFO: agnhost-primary-5db8ddd565-ktltd started at 2022-05-17 14:15:55 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.376: INFO: Container primary ready: true, restart count 0 May 17 14:16:27.376: INFO: send-events-4adc3414-703f-4b62-a42c-926d4b4b6fc2 started at <nil> (0+0 container statuses recorded) May 17 14:16:27.376: INFO: agnhost-replica-6bcf79b489-885sh started at 2022-05-17 14:15:56 +0000 UTC (0+1 container statuses recorded) May 17 14:16:27.376: INFO: Container replica ready: true, restart count 0 May 17 14:16:28.765: INFO: Latency metrics for node kind-worker May 17 14:16:28.765: INFO: Logging node info for node kind-worker2 May 17 14:16:28.820: INFO: Node Info: &Node{ObjectMeta:{kind-worker2 d009e97f-d73b-4ed0-8afd-9f776bcb704d 6998 0 2022-05-17 14:13:16 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker2 
kubernetes.io/os:linux topology.hostpath.csi/node:kind-worker2] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-2591":"kind-worker2","csi-mock-csi-mock-volumes-8289":"csi-mock-csi-mock-volumes-8289"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubeadm Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2022-05-17 14:13:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubelet Update v1 2022-05-17 14:15:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: 
{{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:47 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:47 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-17 14:15:47 +0000 UTC,LastTransitionTime:2022-05-17 14:13:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-17 14:15:47 +0000 UTC,LastTransitionTime:2022-05-17 14:13:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.3,},NodeAddress{Type:Hostname,Address:kind-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:521433a178fd4494a7881394e8d644cf,SystemUUID:c86f292f-55cb-49c2-9dda-efb91a06dab9,BootID:74a98684-362f-4056-ae8e-c98add53e9d2,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 
21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.19+64ad393c62a2c5,KubeProxyVersion:v1.22.10-rc.0.19+64ad393c62a2c5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:5963165f44b226e3bcc74555b40074fc3b47b362fc2fce35ac184854a283aa9a k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:63e87a9aad6a6666e3f6f4b9aaee92307a9390cfc1e41a029ceca9f6c4439478 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:123265849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:59073d1c8ff506ad93a224d17df309f7bbbc344acb535ac430fef15e3866f973 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-05-17@sha256:7f13d8eb6f1182786e7080a1f94ea5d4cf6873534c2d47e79c444c8216f823aa k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.19_64ad393c62a2c5],SizeBytes:53932856,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 
k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 
k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 17 14:16:28.821: INFO: Logging kubelet events for node kind-worker2 May 17 14:16:28.875: INFO: Logging pods the kubelet thinks is on node kind-worker2 May 17 14:16:29.034: INFO: csi-hostpathplugin-0 started at 2022-05-17 14:13:56 +0000 UTC (0+7 container statuses recorded) May 17 14:16:29.034: INFO: Container csi-attacher ready: true, restart count 0 May 17 14:16:29.034: INFO: Container csi-provisioner ready: true, restart count 0 May 17 14:16:29.034: INFO: Container csi-resizer ready: true, restart count 0 May 17 14:16:29.034: INFO: Container csi-snapshotter ready: true, restart count 0 May 17 14:16:29.034: INFO: Container hostpath ready: true, restart count 0 May 17 14:16:29.034: INFO: Container liveness-probe ready: true, restart count 0 May 17 14:16:29.034: INFO: Container node-driver-registrar ready: true, restart count 0 May 17 14:16:29.034: INFO: ss-0 started at 2022-05-17 14:15:25 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.034: INFO: 
Container webserver ready: true, restart count 0 May 17 14:16:29.034: INFO: pod-23740531-141e-4062-9ddb-69c15ab1b060 started at 2022-05-17 14:15:53 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.034: INFO: Container write-pod ready: true, restart count 0 May 17 14:16:29.034: INFO: rs-vkdrm started at 2022-05-17 14:14:49 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.034: INFO: Container donothing ready: false, restart count 0 May 17 14:16:29.034: INFO: pvc-volume-tester-wh448 started at 2022-05-17 14:15:47 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.034: INFO: Container volume-tester ready: true, restart count 0 May 17 14:16:29.034: INFO: service-proxy-toggled-wrl7r started at 2022-05-17 14:15:47 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.034: INFO: Container service-proxy-toggled ready: true, restart count 0 May 17 14:16:29.034: INFO: ss2-0 started at 2022-05-17 14:16:01 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.034: INFO: Container webserver ready: false, restart count 0 May 17 14:16:29.034: INFO: frontend-685fc574d5-lc6wl started at 2022-05-17 14:15:55 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.034: INFO: Container guestbook-frontend ready: true, restart count 0 May 17 14:16:29.034: INFO: frontend-685fc574d5-mj2wn started at 2022-05-17 14:15:55 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.034: INFO: Container guestbook-frontend ready: true, restart count 0 May 17 14:16:29.034: INFO: service-proxy-disabled-brdhb started at 2022-05-17 14:15:32 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.034: INFO: Container service-proxy-disabled ready: true, restart count 0 May 17 14:16:29.034: INFO: pod-e8401689-4cef-4e20-9f2e-264844f9d704 started at 2022-05-17 14:15:53 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.034: INFO: Container write-pod ready: false, restart count 0 May 17 14:16:29.034: INFO: service-proxy-disabled-xzkc8 started at 
2022-05-17 14:15:32 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.034: INFO: Container service-proxy-disabled ready: true, restart count 0 May 17 14:16:29.034: INFO: csi-mockplugin-attacher-0 started at 2022-05-17 14:13:58 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.034: INFO: Container csi-attacher ready: true, restart count 0 May 17 14:16:29.034: INFO: kube-proxy-sdmmb started at 2022-05-17 14:13:29 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.034: INFO: Container kube-proxy ready: true, restart count 0 May 17 14:16:29.034: INFO: agnhost-replica-6bcf79b489-xd4h9 started at 2022-05-17 14:15:56 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.034: INFO: Container replica ready: true, restart count 0 May 17 14:16:29.034: INFO: pod-9ec2f9c5-e367-4a4f-8edf-f159344127d8 started at 2022-05-17 14:16:03 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.034: INFO: Container write-pod ready: true, restart count 0 May 17 14:16:29.034: INFO: service-proxy-toggled-djh27 started at 2022-05-17 14:15:47 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.034: INFO: Container service-proxy-toggled ready: true, restart count 0 May 17 14:16:29.034: INFO: inline-volume-tester-tx6st started at 2022-05-17 14:13:56 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.034: INFO: Container csi-volume-tester ready: true, restart count 0 May 17 14:16:29.034: INFO: csi-mockplugin-0 started at 2022-05-17 14:13:58 +0000 UTC (0+3 container statuses recorded) May 17 14:16:29.034: INFO: Container csi-provisioner ready: true, restart count 0 May 17 14:16:29.034: INFO: Container driver-registrar ready: true, restart count 0 May 17 14:16:29.034: INFO: Container mock ready: true, restart count 0 May 17 14:16:29.034: INFO: service-headless-toggled-dvx6b started at 2022-05-17 14:14:08 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.034: INFO: Container service-headless-toggled ready: true, restart count 0 May 
17 14:16:29.034: INFO: affinity-nodeport-f2r7w started at 2022-05-17 14:15:27 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.034: INFO: Container affinity-nodeport ready: true, restart count 0 May 17 14:16:29.034: INFO: hostexec-kind-worker2-kwzz9 started at 2022-05-17 14:15:35 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.034: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:29.034: INFO: kindnet-mq8qb started at 2022-05-17 14:13:17 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.034: INFO: Container kindnet-cni ready: true, restart count 0 May 17 14:16:29.034: INFO: service-headless-9mh45 started at 2022-05-17 14:13:53 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.034: INFO: Container service-headless ready: true, restart count 0 May 17 14:16:29.034: INFO: pod-a2ba308d-816a-406c-bf57-d5840e5d5387 started at 2022-05-17 14:15:58 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.034: INFO: Container write-pod ready: true, restart count 0 May 17 14:16:29.034: INFO: hostexec-kind-worker2-k8cx7 started at 2022-05-17 14:15:45 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.034: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:29.034: INFO: pod-server-2 started at 2022-05-17 14:15:45 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.034: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:29.034: INFO: hostexec-kind-worker2-gh6gn started at 2022-05-17 14:15:19 +0000 UTC (0+1 container statuses recorded) May 17 14:16:29.034: INFO: Container agnhost-container ready: true, restart count 0 May 17 14:16:29.034: INFO: pod-2d828178-7e03-4331-91b0-2165aa4eeee5 started at <nil> (0+0 container statuses recorded) May 17 14:16:29.034: INFO: csi-mockplugin-0 started at 2022-05-17 14:15:19 +0000 UTC (0+4 container statuses recorded) May 17 14:16:29.035: INFO: Container busybox ready: true, restart count 0 May 17 
14:16:29.035: INFO: Container csi-provisioner ready: true, restart count 0 May 17 14:16:29.035: INFO: Container driver-registrar ready: true, restart count 0 May 17 14:16:29.035: INFO: Container mock ready: true, restart count 0 May 17 14:16:30.841: INFO: Latency metrics for node kind-worker2 May 17 14:16:30.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-800" for this suite.
Kubernetes e2e suite [sig-api-machinery] API priority and fairness should ensure that requests can be classified by adding FlowSchema and PriorityLevelConfiguration
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]
Kubernetes e2e suite [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]
Kubernetes e2e suite [sig-api-machinery] Discovery Custom resource should have storage version hash
Kubernetes e2e suite [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]
Kubernetes e2e suite [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
Kubernetes e2e suite [sig-api-machinery] Garbage collector should delete jobs and pods created by cronjob
Kubernetes e2e suite [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]
Kubernetes e2e suite [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
Kubernetes e2e suite [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]
Kubernetes e2e suite [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
Kubernetes e2e suite [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
Kubernetes e2e suite [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]
Kubernetes e2e suite [sig-api-machinery] Garbage collector should orphan pods created by rc if deleteOptions.OrphanDependents is nil
Kubernetes e2e suite [sig-api-machinery] Garbage collector should support cascading deletion of custom resources
Kubernetes e2e suite [sig-api-machinery] Garbage collector should support orphan deletion of custom resources
Kubernetes e2e suite [sig-api-machinery] Generated clientset should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod
Kubernetes e2e suite [sig-api-machinery] Generated clientset should create v1 cronJobs, delete cronJobs, watch cronJobs
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a custom resource.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should verify ResourceQuota with cross namespace pod affinity scope using scope-selectors.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]
Kubernetes e2e suite [sig-api-machinery] Server request timeout default timeout should be used if the specified timeout in the request URL is 0s
Kubernetes e2e suite [sig-api-machinery] Server request timeout should return HTTP status code 400 if the user specifies an invalid timeout in the request URL
Kubernetes e2e suite [sig-api-machinery] Server request timeout the request should be served with a default timeout if the specified timeout in the request URL exceeds maximum allowed
Kubernetes e2e suite [sig-api-machinery] ServerSideApply should create an applied object if it does not already exist
Kubernetes e2e suite [sig-api-machinery] ServerSideApply should give up ownership of a field if forced applied by a controller
Kubernetes e2e suite [sig-api-machinery] ServerSideApply should ignore conflict errors if force apply is used
Kubernetes e2e suite [sig-api-machinery] ServerSideApply should not remove a field if an owner unsets the field but other managers still have ownership of the field
Kubernetes e2e suite [sig-api-machinery] ServerSideApply should remove a field if it is owned but removed in the apply request
Kubernetes e2e suite [sig-api-machinery] ServerSideApply should work for CRDs
Kubernetes e2e suite [sig-api-machinery] ServerSideApply should work for subresources
Kubernetes e2e suite [sig-api-machinery] Servers with support for API chunking should return chunks of results for list calls
Kubernetes e2e suite [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]
Kubernetes e2e suite [sig-api-machinery] Servers with support for Table transformation should return chunks of table results for list calls
Kubernetes e2e suite [sig-api-machinery] Servers with support for Table transformation should return generic metadata details across all namespaces for nodes
Kubernetes e2e suite [sig-api-machinery] Servers with support for Table transformation should return pod details
Kubernetes e2e suite [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]
Kubernetes e2e suite [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]
Kubernetes e2e suite [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]
Kubernetes e2e suite [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
Kubernetes e2e suite [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]
Kubernetes e2e suite [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/json"
Kubernetes e2e suite [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/json,application/vnd.kubernetes.protobuf"
Kubernetes e2e suite [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/vnd.kubernetes.protobuf"
Kubernetes e2e suite [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/vnd.kubernetes.protobuf,application/json"
Kubernetes e2e suite [sig-api-machinery] health handlers should contain necessary checks
Kubernetes e2e suite [sig-api-machinery] server version should find the server version [Conformance]
Kubernetes e2e suite [sig-apps] CronJob should delete failed finished jobs with limit of one job
Kubernetes e2e suite [sig-apps] CronJob should delete successful finished jobs with limit of one successful job
Kubernetes e2e suite [sig-apps] CronJob should not emit unexpected warnings
Kubernetes e2e suite [sig-apps] CronJob should remove from active list jobs that have been deleted
Kubernetes e2e suite [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]
Kubernetes e2e suite [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]
Kubernetes e2e suite [sig-apps] CronJob should support CronJob API operations [Conformance]
Kubernetes e2e suite [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]
Kubernetes e2e suite [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]
Kubernetes e2e suite [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]
Kubernetes e2e suite [sig-apps] Deployment deployment reaping should cascade to its replica sets and pods
Kubernetes e2e suite [sig-apps] Deployment deployment should delete old replica sets [Conformance]
Kubernetes e2e suite [sig-apps] Deployment deployment should support proportional scaling [Conformance]
Kubernetes e2e suite [sig-apps] Deployment deployment should support rollover [Conformance]
Kubernetes e2e suite [sig-apps] Deployment iterative rollouts should eventually progress
Kubernetes e2e suite [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]
Kubernetes e2e suite [sig-apps] Deployment should validate Deployment Status endpoints [Conformance]
Kubernetes e2e suite [sig-apps] Deployment test Deployment ReplicaSet orphaning and adoption regarding controllerRef
Kubernetes e2e suite [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]
Kubernetes e2e suite [sig-apps] DisruptionController evictions: enough pods, absolute => should allow an eviction
Kubernetes e2e suite [sig-apps] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction
Kubernetes e2e suite [sig-apps] DisruptionController evictions: maxUnavailable allow single eviction, percentage => should allow an eviction
Kubernetes e2e suite [sig-apps] DisruptionController evictions: no PDB => should allow an eviction
Kubernetes e2e suite [sig-apps] DisruptionController evictions: too few pods, absolute => should not allow an eviction
Kubernetes e2e suite [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it [Conformance]
Kubernetes e2e suite [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]
Kubernetes e2e suite [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]
Kubernetes e2e suite [sig-apps] DisruptionController should observe that the PodDisruptionBudget status is not updated for unmanaged pods
Kubernetes e2e suite [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]
Kubernetes e2e suite [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]
Kubernetes e2e suite [sig-apps] Job should create pods for an Indexed job with completion indexes and specified hostname
Kubernetes e2e suite [sig-apps] Job should delete a job [Conformance]
Kubernetes e2e suite [sig-apps] Job should delete pods when suspended
Kubernetes e2e suite [sig-apps] Job should fail to exceed backoffLimit
Kubernetes e2e suite [sig-apps] Job should fail when exceeds active deadline
Kubernetes e2e suite [sig-apps] Job should not create pods when created in suspend state
Kubernetes e2e suite [sig-apps] Job should remove pods when job is deleted
Kubernetes e2e suite [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
Kubernetes e2e suite [sig-apps] Job should run a job to completion when tasks sometimes fail and are not locally restarted
Kubernetes e2e suite [sig-apps] Job should run a job to completion when tasks succeed
Kubernetes e2e suite [sig-apps] ReplicaSet Replace and Patch tests [Conformance]
Kubernetes e2e suite [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]
Kubernetes e2e suite [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
Kubernetes e2e suite [sig-apps] ReplicaSet should list and delete a collection of ReplicaSets [Conformance]
Kubernetes e2e suite [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
Kubernetes e2e suite [sig-apps] ReplicaSet should surface a failure condition on a common issue like exceeded quota
Kubernetes e2e suite [sig-apps] ReplicaSet should validate Replicaset Status endpoints [Conformance]
Kubernetes e2e suite [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]
Kubernetes e2e suite [sig-apps] ReplicationController should release no longer matching pods [Conformance]
Kubernetes e2e suite [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]
Kubernetes e2e suite [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]
Kubernetes e2e suite [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]
Kubernetes e2e suite [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]
Kubernetes e2e suite [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should adopt matching orphans and release non-matching pods
Kubernetes e2e suite [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]
Kubernetes e2e suite [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should list, patch and delete a collection of StatefulSets [Conformance]
Kubernetes e2e suite [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should not deadlock when a pod's predecessor fails
Kubernetes e2e suite [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]
Kubernetes e2e suite [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]
Kubernetes e2e suite [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications with PVCs
Kubernetes e2e suite [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]
Kubernetes e2e suite [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]
Kubernetes e2e suite [sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR
Kubernetes e2e suite [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
Kubernetes e2e suite [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]
Kubernetes e2e suite [sig-auth] ServiceAccounts should ensure a single API token exists
Kubernetes e2e suite [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]
Kubernetes e2e suite [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
Kubernetes e2e suite [sig-auth] ServiceAccounts should mount projected service account token [Conformance]
Kubernetes e2e suite [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]
Kubernetes e2e suite [sig-auth] ServiceAccounts should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 should support forwarding over websockets
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects NO client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends NO DATA, and disconnects
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on localhost should support forwarding over websockets
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects NO client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends NO DATA, and disconnects
Kubernetes e2e suite [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl apply apply set/view last-applied
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl apply should apply a new configuration to an existing RC
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl apply should reuse port when apply to an existing SVC
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl client-side validation should create/apply a CR with unknown fields for CRD with no validation schema
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl client-side validation should create/apply a valid CR for CRD with validation schema
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl client-side validation should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl cluster-info dump should check if cluster-info dump succeeds
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl copy should copy a file from a running Pod
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl create quota should create a quota with scopes
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl create quota should create a quota without scopes
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl create quota should reject quota with invalid scopes
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for cronjob
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl get componentstatuses should get componentstatuses
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should contain last line of the log
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should handle in-cluster config
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should return command exit codes execing into a container with a failing command
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should return command exit codes execing into a container with a successful command
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should return command exit codes running a failing command
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should return command exit codes running a successful command
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should support exec
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should support exec through kubectl proxy
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should support exec using resource/name
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should support port-forward
Kubernetes e2e suite [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client kubectl wait should ignore not found error with --for=delete
Kubernetes e2e suite [sig-instrumentation] Events API should delete a collection of events [Conformance]
Kubernetes e2e suite [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
Kubernetes e2e suite [sig-instrumentation] Events should delete a collection of events [Conformance]
Kubernetes e2e suite [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
Kubernetes e2e suite [sig-instrumentation] MetricsGrabber should grab all metrics from API server.
Kubernetes e2e suite [sig-instrumentation] MetricsGrabber should grab all metrics from a ControllerManager.
Kubernetes e2e suite [sig-instrumentation] MetricsGrabber should grab all metrics from a Kubelet.
Kubernetes e2e suite [sig-instrumentation] MetricsGrabber should grab all metrics from a Scheduler.
Kubernetes e2e suite [sig-network] Conntrack should be able to preserve UDP traffic when initial unready endpoints get ready
Kubernetes e2e suite [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service
Kubernetes e2e suite [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service
Kubernetes e2e suite [sig-network] Conntrack should drop INVALID conntrack entries
Kubernetes e2e suite [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-network] DNS should provide DNS for ExternalName services [Conformance]
Kubernetes e2e suite [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]
Kubernetes e2e suite [sig-network] DNS should provide DNS for services [Conformance]
Kubernetes e2e suite [sig-network] DNS should provide DNS for the cluster [Conformance]
Kubernetes e2e suite [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]
Kubernetes e2e suite [sig-network] DNS should support configurable pod DNS nameservers [Conformance]
Kubernetes e2e suite [sig-network] DNS should support configurable pod resolv.conf
Kubernetes e2e suite [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]
Kubernetes e2e suite [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]
Kubernetes e2e suite [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]
Kubernetes e2e suite [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]
Kubernetes e2e suite [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]
Kubernetes e2e suite [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-network] Ingress API should support creating Ingress API operations [Conformance]
Kubernetes e2e suite [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]
Kubernetes e2e suite [sig-network] KubeProxy should set TCP CLOSE_WAIT timeout [Privileged]
Kubernetes e2e suite [sig-network] Netpol API should support creating NetworkPolicy API operations
Kubernetes e2e suite [sig-network] NetworkPolicy API should support creating NetworkPolicy API operations
Kubernetes e2e suite [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should be able to handle large requests: http
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should be able to handle large requests: udp
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for client IP based session affinity: http [LinuxOnly]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for client IP based session affinity: udp [LinuxOnly]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for endpoint-Service: http
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for endpoint-Service: udp
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for multiple endpoint-Services with same selector
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for node-Service: http
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for node-Service: udp
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for pod-Service: http
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for pod-Service: udp
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for service endpoints using hostNetwork
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should support basic nodePort: udp functionality
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should update endpoints: http
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should update endpoints: udp
Kubernetes e2e suite [sig-network] Networking should check kube-proxy urls
Kubernetes e2e suite [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services
Kubernetes e2e suite [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]
Kubernetes e2e suite [sig-network] Proxy version v1 should proxy logs on node using proxy subresource
Kubernetes e2e suite [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource
Kubernetes e2e suite [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]
Kubernetes e2e suite [sig-network] Service endpoints latency should not be very high [Conformance]
Kubernetes e2e suite [sig-network] Services should allow pods to hairpin back to themselves through services
Kubernetes e2e suite [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]
Kubernetes e2e suite [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]
Kubernetes e2e suite [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]
Kubernetes e2e suite [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]
Kubernetes e2e suite [sig-network] Services should be able to create a functioning NodePort service [Conformance]
Kubernetes e2e suite [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-network] Services should be able to up and down services
Kubernetes e2e suite [sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols
Kubernetes e2e suite [sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node
Kubernetes e2e suite [sig-network] Services should check NodePort out-of-range
Kubernetes e2e suite [sig-network] Services should complete a service status lifecycle [Conformance]
Kubernetes e2e suite [sig-network] Services should create endpoints for unready pods
Kubernetes e2e suite [sig-network] Services should find a service from listing all namespaces [Conformance]
Kubernetes e2e suite [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-network] Services should implement service.kubernetes.io/headless
Kubernetes e2e suite [sig-network] Services should implement service.kubernetes.io/service-proxy-name
Kubernetes e2e suite [sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
Kubernetes e2e suite [sig-network] Services should prevent NodePort collisions
Kubernetes e2e suite [sig-network] Services should provide secure master service [Conformance]
Kubernetes e2e suite [sig-network] Services should release NodePorts on delete
Kubernetes e2e suite [sig-network] Services should serve a basic endpoint from pods [Conformance]
Kubernetes e2e suite [sig-network] Services should serve multiport endpoints from pods [Conformance]
Kubernetes e2e suite [sig-network] Services should test the lifecycle of an Endpoint [Conformance]
Kubernetes e2e suite [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]
Kubernetes e2e suite [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]
Kubernetes e2e suite [sig-node] ConfigMap should update ConfigMap successfully
Kubernetes e2e suite [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]
Kubernetes e2e suite [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]
Kubernetes e2e suite [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]
Kubernetes e2e suite [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]
Kubernetes e2e suite [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]
Kubernetes e2e suite [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
Kubernetes e2e suite [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
Kubernetes e2e suite [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
Kubernetes e2e suite [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
Kubernetes e2e suite [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
Kubernetes e2e suite [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Lease lease API should be available [Conformance]
Kubernetes e2e suite [sig-node] Mount propagation should propagate mounts within defined scopes
Kubernetes e2e suite [sig-node] NodeLease when the NodeLease feature is enabled should have OwnerReferences set
Kubernetes e2e suite [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace
Kubernetes e2e suite [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently
Kubernetes e2e suite [sig-node] PodTemplates should delete a collection of pod templates [Conformance]
Kubernetes e2e suite [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]
Kubernetes e2e suite [sig-node] Pods Extended Delete Grace Period should be submitted and removed
Kubernetes e2e suite [sig-node] Pods Extended Pod Container Status should never report container start when an init container fails
Kubernetes e2e suite [sig-node] Pods Extended Pod Container Status should never report success for a pending container
Kubernetes e2e suite [sig-node] Pods Extended Pod Container lifecycle should not create extra sandbox if all containers are done
Kubernetes e2e suite [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
Kubernetes e2e suite [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Pods should be updated [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Pods should delete a collection of pods [Conformance]
Kubernetes e2e suite [sig-node] Pods should get a host IP [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]
Kubernetes e2e suite [sig-node] Pods should support pod readiness gates [NodeFeature:PodReadinessGate]
Kubernetes e2e suite [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] PreStop graceful pod terminated should wait until preStop hook completes the process
Kubernetes e2e suite [sig-node] PreStop should call prestop when killing a pod [Conformance]
Kubernetes e2e suite [sig-node] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]
Kubernetes e2e suite [sig-node] Probing container should *not* be restarted by liveness probe because startup probe delays it
Kubernetes e2e suite [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Probing container should *not* be restarted with a non-local redirect http liveness probe
Kubernetes e2e suite [sig-node] Probing container should be ready immediately after startupProbe succeeds
Kubernetes e2e suite [sig-node] Probing container should be restarted by liveness probe after startup probe enables it
Kubernetes e2e suite [sig-node] Probing container should be restarted startup probe fails
Kubernetes e2e suite [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Probing container should be restarted with a failing exec liveness probe that took longer than the timeout
Kubernetes e2e suite [sig-node] Probing container should be restarted with a local redirect http liveness probe
Kubernetes e2e suite [sig-node] Probing container should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance]
Kubernetes e2e suite [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Probing container should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]
Kubernetes e2e suite [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]
Kubernetes e2e suite [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with an unconfigured handler [NodeFeature:RuntimeHandler]
Kubernetes e2e suite [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with conflicting node selector
Kubernetes e2e suite [sig-node] RuntimeClass should reject a Pod requesting a deleted RuntimeClass [NodeFeature:RuntimeHandler]
Kubernetes e2e suite [sig-node] RuntimeClass should reject a Pod requesting a non-existent RuntimeClass [NodeFeature:RuntimeHandler]
Kubernetes e2e suite [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with a configured handler [NodeFeature:RuntimeHandler]
Kubernetes e2e suite [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with scheduling without taints
Kubernetes e2e suite [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]
Kubernetes e2e suite [sig-node] Secrets should patch a secret [Conformance]
Kubernetes e2e suite [sig-node] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]
Kubernetes e2e suite [sig-node] Security Context When creating a container with runAsNonRoot should not run without a specified user ID
Kubernetes e2e suite [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]
Kubernetes e2e suite [sig-node] Security Context When creating a container with runAsNonRoot should run with an image specified user ID
Kubernetes e2e suite [sig-node] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]
Kubernetes e2e suite [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]
Kubernetes e2e suite [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]
Kubernetes e2e suite [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
Kubernetes e2e suite [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]
Kubernetes e2e suite [sig-node] Security Context should support seccomp runtime/default [LinuxOnly]
Kubernetes e2e suite [sig-node] Security Context should support seccomp unconfined on the container [LinuxOnly]
Kubernetes e2e suite [sig-node] Security Context should support seccomp unconfined on the pod [LinuxOnly]
Kubernetes e2e suite [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Sysctls [LinuxOnly] [NodeConformance] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]
Kubernetes e2e suite [sig-node] Sysctls [LinuxOnly] [NodeConformance] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]
Kubernetes e2e suite [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
Kubernetes e2e suite [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21]
Kubernetes e2e suite [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]
Kubernetes e2e suite [sig-node] kubelet Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.
Kubernetes e2e suite [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support two pods which share the same volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which share the same volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=File
Kubernetes e2e suite [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should not modify fsGroup if fsGroupPolicy=None
Kubernetes e2e suite [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=off, nodeExpansion=on
Kubernetes e2e suite [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=on, nodeExpansion=on
Kubernetes e2e suite [sig-storage] CSI mock volume CSI Volume expansion should expand volume without restarting pod if nodeExpansion=off
Kubernetes e2e suite [sig-storage] CSI mock volume CSI Volume expansion should not expand volume if resizingOnDriver=off, resizingOnSC=on
Kubernetes e2e suite [sig-storage] CSI mock volume CSI attach test using mock driver should not require VolumeAttach for drivers without attachment
Kubernetes e2e suite [sig-storage] CSI mock volume CSI attach test using mock driver should preserve attachment policy when no CSIDriver present
Kubernetes e2e suite [sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for drivers with attachment
Kubernetes e2e suite [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=off, nodeExpansion=on
Kubernetes e2e suite [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=on, nodeExpansion=on
Kubernetes e2e suite [sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume
Kubernetes e2e suite [sig-storage] CSI mock volume CSI workload information using mock driver should be passed when podInfoOnMount=true
Kubernetes e2e suite [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when CSIDriver does not exist
Kubernetes e2e suite [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=false
Kubernetes e2e suite [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=nil
Kubernetes e2e suite [sig-storage] CSI mock volume CSIServiceAccountToken token should be plumbed down when csiServiceAccountTokenEnabled=true
Kubernetes e2e suite [sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when CSIDriver is not deployed
Kubernetes e2e suite [sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when csiServiceAccountTokenEnabled=false
Kubernetes e2e suite [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity disabled
Kubernetes e2e suite [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity unused
Kubernetes e2e suite [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity
Kubernetes e2e suite [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity
Kubernetes e2e suite [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity
Kubernetes e2e suite [sig-storage] CSI mock volume storage capacity exhausted, immediate binding
Kubernetes e2e suite [sig-storage] CSI mock volume storage capacity exhausted, late binding, with topology
Kubernetes e2e suite [sig-storage] CSI mock volume storage capacity unlimited
Kubernetes e2e suite [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-storage] Downward API volume should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Dynamic Provisioning GlusterDynamicProvisioner should create and delete persistent volumes [fast]
Kubernetes e2e suite [sig-storage] EmptyDir volumes pod should support memory backed volumes of specified size
Kubernetes e2e suite [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)
Kubernetes e2e suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root
Kubernetes e2e suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is root
Kubernetes e2e suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] nonexistent volume subPath should have the correct mode and owner using FSGroup
Kubernetes e2e suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup
Kubernetes e2e suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on tmpfs should have the correct mode using FSGroup
Kubernetes e2e suite [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]
Kubernetes e2e suite [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : configmap
Kubernetes e2e suite [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : projected
Kubernetes e2e suite [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : secret
Kubernetes e2e suite [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [sig-storage] HostPath should support r/w [NodeConformance]
Kubernetes e2e suite [sig-storage] HostPath should support subPath [NodeConformance]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] PV Protection Verify "immediate" deletion of a PV that is not bound to a PVC
Kubernetes e2e suite [sig-storage] PV Protection Verify that PV bound to a PVC is not removed immediately
Kubernetes e2e suite [sig-storage] PVC Protection Verify "immediate" deletion of a PVC that is not in active use by a pod
Kubernetes e2e suite [sig-storage] PVC Protection Verify that PVC in active use by a pod is not removed immediately
Kubernetes e2e suite [sig-storage] PVC Protection Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable
Kubernetes e2e suite [sig-storage] PersistentVolumes-local Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeAffinity
Kubernetes e2e suite [sig-storage] PersistentVolumes-local Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeSelector
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-link] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected configMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-storage] Projected configMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
Kubernetes e2e suite [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]
Kubernetes e2e suite [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-storage] Subpath Container restart should verify that container can restart successfully after configmaps modified
Kubernetes e2e suite [sig-storage] Volumes ConfigMap should be mountable
Kubernetes e2e suite [sig-api-machinery] API priority and fairness should ensure that requests can't be drowned out (fairness)
Kubernetes e2e suite [sig-api-machinery] API priority and fairness should ensure that requests can't be drowned out (priority)
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] [Flaky] kubectl explain works for CR with the same resource name as built-in object.
Kubernetes e2e suite [sig-api-machinery] Etcd failure [Disruptive] should recover from SIGKILL
Kubernetes e2e suite [sig-api-machinery] Etcd failure [Disruptive] should recover from network partition with master
Kubernetes e2e suite [sig-api-machinery] Namespaces [Serial] should always delete fast (ALL of 100 namespaces in 150 seconds) [Feature:ComprehensiveNamespaceDraining]
Kubernetes e2e suite [sig-api-machinery] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds)
Kubernetes e2e suite [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
Kubernetes e2e suite [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]
Kubernetes e2e suite [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's multiple priority class scope (quota set to pod count: 2) against 2 pods with same priority classes.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (cpu, memory quota set) against a pod with same priority class.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against 2 pods with different priority class.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against 2 pods with same priority class.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against a pod with different priority class (ScopeSelectorOpExists).
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against a pod with different priority class (ScopeSelectorOpNotIn).
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against a pod with same priority class.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:ScopeSelectors] should verify ResourceQuota with best effort scope using scope-selectors.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:ScopeSelectors] should verify ResourceQuota with terminating scopes through scope selectors.
Kubernetes e2e suite [sig-api-machinery] Servers with support for API chunking should support continue listing from the last key if the original version has been compacted away, though the list is inconsistent [Slow]
Kubernetes e2e suite [sig-api-machinery] StorageVersion resources [Feature:StorageVersionAPI] storage version with non-existing id should be GC'ed
Kubernetes e2e suite [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]
Kubernetes e2e suite [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should list and delete a collection of DaemonSets [Conformance]
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should not update pod when spec was updated and update strategy is OnDelete
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should run and stop complex daemon with node affinity
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should surge pods onto nodes when spec was updated and update strategy is RollingUpdate
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should verify changes to a daemon set status [Conformance]
Kubernetes e2e suite [sig-apps] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart
Kubernetes e2e suite [sig-apps] DaemonRestart [Disruptive] Kube-proxy should recover after being killed accidentally
Kubernetes e2e suite [sig-apps] DaemonRestart [Disruptive] Kubelet should not restart containers across restart
Kubernetes e2e suite [sig-apps] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart
Kubernetes e2e suite [sig-apps] Deployment should not disrupt a cloud load-balancer's connectivity during rollout
Kubernetes e2e suite [sig-apps] DisruptionController evictions: maxUnavailable deny evictions, integer => should not allow an eviction [Serial]
Kubernetes e2e suite [sig-apps] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction [Serial]
Kubernetes e2e suite [sig-apps] Job should run a job to completion with CPU requests [Serial]
Kubernetes e2e suite [sig-apps] ReplicaSet should serve a basic image on each replica with a private image
Kubernetes e2e suite [sig-apps] ReplicationController should serve a basic image on each replica with a private image
Kubernetes e2e suite [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
Kubernetes e2e suite [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
Kubernetes e2e suite [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working CockroachDB cluster
Kubernetes e2e suite [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working mysql cluster
Kubernetes e2e suite [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working redis cluster
Kubernetes e2e suite [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working zookeeper cluster
Kubernetes e2e suite [sig-apps] StatefulSet MinReadySeconds should be honored when enabled [Feature:StatefulSetMinReadySeconds] [alpha]
Kubernetes e2e suite [sig-apps] [Feature:TTLAfterFinished] job should be deleted once it finishes after TTL seconds
Kubernetes e2e suite [sig-apps] stateful Upgrade [Feature:StatefulUpgrade] stateful upgrade should maintain a functioning cluster
Kubernetes e2e suite [sig-auth] Metadata Concealment should run a check-metadata-concealment job to completion
Kubernetes e2e suite [sig-auth] PodSecurityPolicy [Feature:PodSecurityPolicy] should allow pods under the privileged policy.PodSecurityPolicy
Kubernetes e2e suite [sig-auth] PodSecurityPolicy [Feature:PodSecurityPolicy] should enforce the restricted policy.PodSecurityPolicy
Kubernetes e2e suite [sig-auth] PodSecurityPolicy [Feature:PodSecurityPolicy] should forbid pod creation when no PSP is available
Kubernetes e2e suite [sig-auth] ServiceAccount admission controller migration [Feature:BoundServiceAccountTokenVolume] master upgrade should maintain a functioning cluster
Kubernetes e2e suite [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow]
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthenticator] The kubelet can delegate ServiceAccount tokens to the API server
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthenticator] The kubelet's main port 10250 should reject requests with no credentials
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthorizer] A node shouldn't be able to create another node
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthorizer] A node shouldn't be able to delete another node
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthorizer] Getting a non-existent configmap should exit with the Forbidden error, not a NotFound error
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthorizer] Getting a non-existent secret should exit with the Forbidden error, not a NotFound error
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthorizer] Getting a secret for a workload the node has access to should succeed
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthorizer] Getting an existing configmap should exit with the Forbidden error
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthorizer] Getting an existing secret should exit with the Forbidden error
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaler scalability [Slow] CA ignores unschedulable pods while scheduling schedulable pods [Feature:ClusterAutoscalerScalability6]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale down empty nodes [Feature:ClusterAutoscalerScalability3]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale down underutilized nodes [Feature:ClusterAutoscalerScalability4]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale up at all [Feature:ClusterAutoscalerScalability1]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale up twice [Feature:ClusterAutoscalerScalability2]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaler scalability [Slow] shouldn't scale down with underutilized nodes due to host port conflicts [Feature:ClusterAutoscalerScalability5]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group down to 0[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group up from 0[Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] Should not scale GPU pool up if pod does not require GPUs [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] Should scale down GPU pool from 1 [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] Should scale up GPU pool from 0 [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] Should scale up GPU pool from 1 [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] Shouldn't perform scale up operation and should list unhealthy status if most of the cluster is broken[Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should add node to the particular mig [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining multiple pods one by one as dictated by pdb[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down when rescheduling a pod is required and pdb allows for it[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed and one node is broken [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed when there is non autoscaled pool[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should disable node pool autoscaling [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small and one node is broken [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small and there is another node pool that is not autoscaled [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pod requesting EmptyDir volume is pending [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pod requesting volume is pending [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pods are pending due to host port conflict [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pods are pending due to pod anti-affinity [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should scale up correct target pool [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should scale up when non expendable pod is created [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't be able to scale down when rescheduling a pod is required, but pdb doesn't allow drain[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't increase cluster size if pending pod is too large [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale down when non expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale up when expendable pod is created [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale up when expendable pod is preempted [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't trigger additional scale-ups during processing scale-up [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed
Kubernetes e2e suite [sig-autoscaling] DNS horizontal autoscaling kube-dns-autoscaler should scale kube-dns pods in both nonfaulty and faulty scenarios
Kubernetes e2e suite [sig-autoscaling] [Feature:ClusterSizeAutoscalingScaleUp] [Slow] Autoscaling Autoscaling a service from 1 pod and 3 nodes to 8 pods and >=4 nodes takes less than 15 minutes
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) ReplicationController light Should scale from 1 pod to 2 pods
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) ReplicationController light Should scale from 2 pods to 1 pod [Slow]
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability
Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale down with Custom Metric of type Object from Stackdriver [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale down with Custom Metric of type Pod from Stackdriver [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale down with Custom Metric of type Pod from Stackdriver with Prometheus [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale down with External Metric with target average value from Stackdriver [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale down with External Metric with target value from Stackdriver [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale up with two External metrics from Stackdriver [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale up with two metrics of type Pod from Stackdriver [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl taint [Serial] should remove all the taints with the same key off a node
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl taint [Serial] should update the taint on a node
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should return command exit codes [Slow] running a failing command with --leave-stdin-open
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should return command exit codes [Slow] running a failing command without --restart=Never
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should return command exit codes [Slow] running a failing command without --restart=Never, but with --rm
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should support exec through an HTTP proxy
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should support inline execution and attach
Kubernetes e2e suite [sig-cloud-provider-gcp] Addon update should propagate add-on file changes [Slow]
Kubernetes e2e suite [sig-cloud-provider-gcp] Downgrade [Feature:Downgrade] cluster downgrade should maintain a functioning cluster [Feature:ClusterDowngrade]
Kubernetes e2e suite [sig-cloud-provider-gcp] GKE node pools [Feature:GKENodePool] should create a cluster with multiple node pools [Feature:GKENodePool]
Kubernetes e2e suite [sig-cloud-provider-gcp] HA-master [Feature:HAMaster] survive addition/removal replicas different zones [Serial][Disruptive]
Kubernetes e2e suite [sig-cloud-provider-gcp] HA-master [Feature:HAMaster] survive addition/removal replicas multizone workers [Serial][Disruptive]
Kubernetes e2e suite [sig-cloud-provider-gcp] HA-master [Feature:HAMaster] survive addition/removal replicas same zone [Serial][Disruptive]
Kubernetes e2e suite [sig-cloud-provider-gcp] Nodes [Disruptive] Resize [Slow] should be able to add nodes
Kubernetes e2e suite [sig-cloud-provider-gcp] Nodes [Disruptive] Resize [Slow] should be able to delete nodes
Kubernetes e2e suite [sig-cloud-provider-gcp] Ports Security Check [Feature:KubeletSecurity] should not be able to proxy to cadvisor port 4194 using proxy subresource
Kubernetes e2e suite [sig-cloud-provider-gcp] Ports Security Check [Feature:KubeletSecurity] should not be able to proxy to the readonly kubelet port 10255 using proxy subresource
Kubernetes e2e suite [sig-cloud-provider-gcp] Ports Security Check [Feature:KubeletSecurity] should not have port 10255 open on its all public IP addresses
Kubernetes e2e suite [sig-cloud-provider-gcp] Ports Security Check [Feature:KubeletSecurity] should not have port 4194 open on its all public IP addresses
Kubernetes e2e suite [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by dropping all inbound packets for a while and ensure they function afterwards
Kubernetes e2e suite [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by dropping all outbound packets for a while and ensure they function afterwards
Kubernetes e2e suite [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by ordering clean reboot and ensure they function upon restart
Kubernetes e2e suite [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by ordering unclean reboot and ensure they function upon restart
Kubernetes e2e suite [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by switching off the network interface and ensure they function upon switch on
Kubernetes e2e suite [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by triggering kernel panic and ensure they function upon restart
Kubernetes e2e suite [sig-cloud-provider-gcp] Recreate [Feature:Recreate] recreate nodes and ensure they function upon restart
Kubernetes e2e suite [sig-cloud-provider-gcp] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover
Kubernetes e2e suite [sig-cloud-provider-gcp] Upgrade [Feature:Upgrade] cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade]
Kubernetes e2e suite [sig-cloud-provider-gcp] Upgrade [Feature:Upgrade] master upgrade should maintain a functioning cluster [Feature:MasterUpgrade]
Kubernetes e2e suite [sig-cloud-provider-gcp] [Disruptive]NodeLease NodeLease deletion node lease should be deleted when corresponding node is deleted
Kubernetes e2e suite [sig-cloud-provider] [Feature:CloudProvider][Disruptive] Nodes should be deleted on API server if it doesn't exist in the cloud provider
Kubernetes e2e suite [sig-cluster-lifecycle] [Feature:BootstrapTokens] should delete the signed bootstrap tokens from clusterInfo ConfigMap when bootstrap token is deleted
Kubernetes e2e suite [sig-cluster-lifecycle] [Feature:BootstrapTokens] should delete the token secret when the secret expired
Kubernetes e2e suite [sig-cluster-lifecycle] [Feature:BootstrapTokens] should not delete the token secret when the secret is not expired
Kubernetes e2e suite [sig-cluster-lifecycle] [Feature:BootstrapTokens] should resign the bootstrap tokens when the clusterInfo ConfigMap updated [Serial][Disruptive]
Kubernetes e2e suite [sig-cluster-lifecycle] [Feature:BootstrapTokens] should sign the new added bootstrap tokens
Kubernetes e2e suite [sig-instrumentation] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s seconds, for a duration of 2m0s
Kubernetes e2e suite [sig-instrumentation] Stackdriver Monitoring should have accelerator metrics [Feature:StackdriverAcceleratorMonitoring]
Kubernetes e2e suite [sig-instrumentation] Stackdriver Monitoring should have cluster metrics [Feature:StackdriverMonitoring]
Kubernetes e2e suite [sig-instrumentation] Stackdriver Monitoring should run Custom Metrics - Stackdriver Adapter for external metrics [Feature:StackdriverExternalMetrics]
Kubernetes e2e suite [sig-instrumentation] Stackdriver Monitoring should run Custom Metrics - Stackdriver Adapter for new resource model [Feature:StackdriverCustomMetrics]
Kubernetes e2e suite [sig-instrumentation] Stackdriver Monitoring should run Custom Metrics - Stackdriver Adapter for old resource model [Feature:StackdriverCustomMetrics]
Kubernetes e2e suite [sig-instrumentation] Stackdriver Monitoring should run Stackdriver Metadata Agent [Feature:StackdriverMetadataAgent]
Kubernetes e2e suite [sig-network] ClusterDns [Feature:Example] should create pod that uses dns
Kubernetes e2e suite [sig-network] DNS configMap nameserver Change stubDomain should be able to change stubDomain configuration [Slow][Serial]
Kubernetes e2e suite [sig-network] DNS configMap nameserver Forward PTR lookup should forward PTR records lookup to upstream nameserver [Slow][Serial]
Kubernetes e2e suite [sig-network] DNS configMap nameserver Forward external name lookup should forward externalname lookup to upstream nameserver [Slow][Serial]
Kubernetes e2e suite [sig-network] DNS should provide DNS for the cluster [Provider:GCE]
Kubernetes e2e suite [sig-network] Firewall rule [Slow] [Serial] should create valid firewall rules for LoadBalancer type service
Kubernetes e2e suite [sig-network] Firewall rule control plane should not expose well-known ports
Kubernetes e2e suite [sig-network] Firewall rule should have correct firewall rules for e2e cluster
Kubernetes e2e suite [sig-network] IngressClass [Feature:Ingress] should not set default value if no default IngressClass [Serial]
Kubernetes e2e suite [sig-network] IngressClass [Feature:Ingress] should prevent Ingress creation if more than 1 IngressClass marked as default [Serial]
Kubernetes e2e suite [sig-network] IngressClass [Feature:Ingress] should set default value on new IngressClass [Serial]
Kubernetes e2e suite [sig-network] LoadBalancers ESIPP [Slow] should handle updates to ExternalTrafficPolicy field
Kubernetes e2e suite [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints
Kubernetes e2e suite [sig-network] LoadBalancers ESIPP [Slow] should work for type=LoadBalancer
Kubernetes e2e suite [sig-network] LoadBalancers ESIPP [Slow] should work for type=NodePort
Kubernetes e2e suite [sig-network] LoadBalancers ESIPP [Slow] should work from pods
Kubernetes e2e suite [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow]
Kubernetes e2e suite [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow]
Kubernetes e2e suite [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow]
Kubernetes e2e suite [sig-network] LoadBalancers should be able to switch session affinity for LoadBalancer service with ESIPP off [Slow] [DisabledForLargeClusters] [LinuxOnly]
Kubernetes e2e suite [sig-network] LoadBalancers should be able to switch session affinity for LoadBalancer service with ESIPP on [Slow] [DisabledForLargeClusters] [LinuxOnly]
Kubernetes e2e suite [sig-network] LoadBalancers should handle load balancer cleanup finalizer for service [Slow]
Kubernetes e2e suite [sig-network] LoadBalancers should have session affinity work for LoadBalancer service with ESIPP off [Slow] [DisabledForLargeClusters] [LinuxOnly]
Kubernetes e2e suite [sig-network] LoadBalancers should have session affinity work for LoadBalancer service with ESIPP on [Slow] [DisabledForLargeClusters] [LinuxOnly]
Kubernetes e2e suite [sig-network] LoadBalancers should only allow access from service loadbalancer source ranges [Slow]
Kubernetes e2e suite [sig-network] LoadBalancers should reconcile LB health check interval [Slow][Serial][Disruptive]
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:Ingress] should conform to Ingress spec
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] rolling update backend pods should not cause service disruption
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] should be able to create a ClusterIP service
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] should be able to switch between IG and NEG modes
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] should conform to Ingress spec
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] should create NEGs for all ports with the Ingress annotation, and NEGs for the standalone annotation otherwise
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] should sync endpoints for both Ingress-referenced NEG and standalone NEG
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] should sync endpoints to NEG
Kubernetes e2e suite [sig-network] Loadbalancing: L7 Scalability GCE [Slow] [Serial] [Feature:IngressScale] Creating and updating ingresses should happen promptly with small/medium/large amount of ingresses
Kubernetes e2e suite [sig-network] Loadbalancing: L7 [Slow] Nginx should conform to Ingress spec
Kubernetes e2e suite [sig-network] Netpol API should support creating NetworkPolicy API with endport field [Feature:NetworkPolicyEndPort]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should allow egress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should allow egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should allow ingress access from namespace on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should allow ingress access from updated namespace [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should allow ingress access from updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should allow ingress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should deny egress from all pods in a namespace [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should deny egress from pods based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should deny ingress access to updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should deny ingress from pods on other namespaces [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce egress policy allowing traffic to a server in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce except clause while egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce ingress policy allowing any port traffic to a server on a specific protocol [Feature:NetworkPolicy] [Feature:UDP]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce multiple egress policies with egress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce multiple ingress policies with ingress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce multiple, stacked policies with overlapping podSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policies to check ingress and egress policies can be controlled independently based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on Multiple PodSelectors and NamespaceSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on NamespaceSelector with MatchExpressions using default ns label [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on NamespaceSelector with MatchExpressions[Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on PodSelector or NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on PodSelector with MatchExpressions[Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on any PodSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow ingress traffic for a target [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow ingress traffic from pods in all namespaces [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow traffic based on NamespaceSelector with MatchLabels using default ns label [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow traffic from pods within server namespace based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow traffic only from a different namespace, based on NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce updated policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should ensure an IP overlapping both IPBlock.CIDR and IPBlock.Except is allowed [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should not allow access by TCP when a policy specifies only UDP [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should not mistakenly treat 'protocol: SCTP' as 'protocol: TCP', even if the plugin doesn't support SCTP [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should properly isolate pods that are selected by a policy allowing SCTP, even if the plugin doesn't support SCTP [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should stop enforcing policies after they are deleted [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should support a 'default-deny-all' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should support a 'default-deny-ingress' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should support allow-all policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should support denying of egress traffic on the client side (even if the server explicitly allows this traffic) [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should work with Ingress, Egress specified together [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should support a 'default-deny-ingress' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol [LinuxOnly] NetworkPolicy between server and client using UDP should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol [LinuxOnly] NetworkPolicy between server and client using UDP should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol [LinuxOnly] NetworkPolicy between server and client using UDP should support a 'default-deny-ingress' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should support a 'default-deny' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow egress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from namespace on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from updated namespace [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should deny ingress access to updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce egress policy allowing traffic to a server in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce except clause while egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple egress policies with egress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple ingress policies with ingress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple, stacked policies with overlapping podSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policies to check ingress and egress policies can be controlled independently based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on NamespaceSelector with MatchExpressions[Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector or NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector with MatchExpressions[Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic from pods within server namespace based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic only from a different namespace, based on NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce updated policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should ensure an IP overlapping both IPBlock.CIDR and IPBlock.Except is allowed [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should not allow access by TCP when a policy specifies only SCTP [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should stop enforcing policies after they are deleted [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support a 'default-deny-all' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support a 'default-deny-ingress' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support allow-all policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should work with Ingress,Egress specified together [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: sctp [LinuxOnly][Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Pods should function for node-pod communication: sctp [LinuxOnly][Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for endpoint-Service: sctp [Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for node-Service: sctp [Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for pod-Service: sctp [Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should update nodePort: http [Slow]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should update nodePort: udp [Slow]
Kubernetes e2e suite [sig-network] Networking IPerf2 [Feature:Networking-Performance] should run iperf2
Kubernetes e2e suite [sig-network] Networking should provide Internet connection for containers [Feature:Networking-IPv4]
Kubernetes e2e suite [sig-network] Networking should provide Internet connection for containers [Feature:Networking-IPv6][Experimental][LinuxOnly]
Kubernetes e2e suite [sig-network] Networking should provider Internet connection for containers using DNS [Feature:Networking-DNS]
Kubernetes e2e suite [sig-network] Networking should recreate its iptables rules if they are deleted [Disruptive]
Kubernetes e2e suite [sig-network] NoSNAT [Feature:NoSNAT] [Slow] Should be able to send traffic between Pods without SNAT
Kubernetes e2e suite [sig-network] SCTP [Feature:SCTP] [LinuxOnly] should allow creating a basic SCTP service with pod and endpoints
Kubernetes e2e suite [sig-network] SCTP [Feature:SCTP] [LinuxOnly] should create a ClusterIP Service with SCTP ports
Kubernetes e2e suite [sig-network] SCTP [Feature:SCTP] [LinuxOnly] should create a Pod with SCTP HostPort
Kubernetes e2e suite [sig-network] Services GCE [Slow] should be able to create and tear down a standard-tier load balancer [Slow]
Kubernetes e2e suite [sig-network] Services should be rejected when no endpoints exist
Kubernetes e2e suite [sig-network] Services should respect internalTrafficPolicy=Local Pod (hostNetwork: true) to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy]
Kubernetes e2e suite [sig-network] Services should respect internalTrafficPolicy=Local Pod (hostNetwork: true) to Pod [Feature:ServiceInternalTrafficPolicy]
Kubernetes e2e suite [sig-network] Services should respect internalTrafficPolicy=Local Pod to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy]
Kubernetes e2e suite [sig-network] Services should respect internalTrafficPolicy=Local Pod to Pod [Feature:ServiceInternalTrafficPolicy]
Kubernetes e2e suite [sig-network] Services should work after restarting apiserver [Disruptive]
Kubernetes e2e suite [sig-network] Services should work after restarting kube-proxy [Disruptive]
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should be able to handle large requests: http
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should be able to handle large requests: udp
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for client IP based session affinity: http [LinuxOnly]
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for client IP based session affinity: udp [LinuxOnly]
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for endpoint-Service: http
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for endpoint-Service: udp
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for node-Service: http
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for node-Service: udp
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for pod-Service: http
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for pod-Service: sctp [Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for pod-Service: udp
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for service endpoints using hostNetwork
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should update endpoints: http
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should update endpoints: udp
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] should be able to reach pod on ipv4 and ipv6 ip
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] should create a single stack service with cluster ip from primary service range
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] should create pod, add ipv6 and ipv4 ip to pod ips
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] should create service with ipv4 cluster ip
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] should create service with ipv4,v6 cluster ip
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] should create service with ipv6 cluster ip
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] should create service with ipv6,v4 cluster ip
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] should have ipv4 and ipv6 internal node ip
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] should have ipv4 and ipv6 node podCIDRs [LinuxOnly]
Kubernetes e2e suite [sig-network] [Feature:PerformanceDNS][Serial] Should answer DNS query for maximum number of services per cluster
Kubernetes e2e suite [sig-network] kube-proxy migration [Feature:KubeProxyDaemonSetMigration] Downgrade kube-proxy from a DaemonSet to static pods should maintain a functioning cluster [Feature:KubeProxyDaemonSetDowngrade]
Kubernetes e2e suite [sig-network] kube-proxy migration [Feature:KubeProxyDaemonSetMigration] Upgrade kube-proxy from static pods to a DaemonSet should maintain a functioning cluster [Feature:KubeProxyDaemonSetUpgrade]
Kubernetes e2e suite [sig-node] AppArmor load AppArmor profiles can disable an AppArmor profile, using unconfined
Kubernetes e2e suite [sig-node] AppArmor load AppArmor profiles should enforce an AppArmor profile
Kubernetes e2e suite [sig-node] Downward API [Serial] [Disruptive] [NodeFeature:DownwardAPIHugePages] Downward API tests for hugepages should provide container's limits.hugepages-<pagesize> and requests.hugepages-<pagesize> as env vars
Kubernetes e2e suite [sig-node] Downward API [Serial] [Disruptive] [NodeFeature:DownwardAPIHugePages] Downward API tests for hugepages should provide default limits.hugepages-<pagesize> from node allocatable
Kubernetes e2e suite [sig-node] Kubelet [Serial] [Slow] experimental resource usage tracking [Feature:ExperimentalResourceUsageTracking] resource tracking for 100 pods per node
Kubernetes e2e suite [sig-node] Kubelet [Serial] [Slow] regular resource usage tracking [Feature:RegularResourceUsageTracking] resource tracking for 0 pods per node
Kubernetes e2e suite [sig-node] Kubelet [Serial] [Slow] regular resource usage tracking [Feature:RegularResourceUsageTracking] resource tracking for 100 pods per node
Kubernetes e2e suite [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]
Kubernetes e2e suite [sig-node] NoExecuteTaintManager Multiple Pods [Serial] only evicts pods without tolerations from tainted nodes
Kubernetes e2e suite [sig-node] NoExecuteTaintManager Single Pod [Serial] doesn't evict pod with tolerations from tainted nodes
Kubernetes e2e suite [sig-node] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes
Kubernetes e2e suite [sig-node] NoExecuteTaintManager Single Pod [Serial] evicts pods from tainted nodes
Kubernetes e2e suite [sig-node] NoExecuteTaintManager Single Pod [Serial] removing taint cancels eviction [Disruptive] [Conformance]
Kubernetes e2e suite [sig-node] NodeProblemDetector should run without error
Kubernetes e2e suite [sig-node] Pod garbage collector [Feature:PodGarbageCollector] [Slow] should handle the creation of 1000 pods
Kubernetes e2e suite [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
Kubernetes e2e suite [sig-node] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]
Kubernetes e2e suite [sig-node] Probing container should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod]
Kubernetes e2e suite [sig-node] Probing container should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod]
Kubernetes e2e suite [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with scheduling with taints [Serial]
Kubernetes e2e suite [sig-node] SSH should SSH to all nodes and run commands
Kubernetes e2e suite [sig-node] Security Context should support volume SELinux relabeling [Flaky] [LinuxOnly]
Kubernetes e2e suite [sig-node] Security Context should support volume SELinux relabeling when using hostIPC [Flaky] [LinuxOnly]
Kubernetes e2e suite [sig-node] Security Context should support volume SELinux relabeling when using hostPID [Flaky] [LinuxOnly]
Kubernetes e2e suite [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]
Kubernetes e2e suite [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]
Kubernetes e2e suite [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]
Kubernetes e2e suite [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
Kubernetes e2e suite [sig-node] [Feature:Example] Downward API should create a pod that prints his name and namespace
Kubernetes e2e suite [sig-node] [Feature:Example] Liveness liveness pods should be automatically restarted
Kubernetes e2e suite [sig-node] [Feature:Example] Secret should create a pod that reads a secret
Kubernetes e2e suite [sig-node] crictl should be able to run crictl on the node
Kubernetes e2e suite [sig-node] gpu Upgrade [Feature:GPUUpgrade] cluster downgrade should be able to run gpu pod after downgrade [Feature:GPUClusterDowngrade]
Kubernetes e2e suite [sig-node] gpu Upgrade [Feature:GPUUpgrade] cluster upgrade should be able to run gpu pod after upgrade [Feature:GPUClusterUpgrade]
Kubernetes e2e suite [sig-node] gpu Upgrade [Feature:GPUUpgrade] master upgrade should NOT disrupt gpu pod [Feature:GPUMasterUpgrade]
Kubernetes e2e suite [sig-node] kubelet host cleanup with volume mounts [HostCleanup][Flaky] Host cleanup after disrupting NFS volume [NFS] after stopping the nfs-server and deleting the (active) client pod, the NFS mount and the pod's UID directory should be removed.
Kubernetes e2e suite [sig-node] kubelet host cleanup with volume mounts [HostCleanup][Flaky] Host cleanup after disrupting NFS volume [NFS] after stopping the nfs-server and deleting the (sleeping) client pod, the NFS mount and the pod's UID directory should be removed.
Kubernetes e2e suite [sig-scheduling] GPUDevicePluginAcrossRecreate [Feature:Recreate] run Nvidia GPU Device Plugin tests with a recreation
Kubernetes e2e suite [sig-scheduling] Multi-AZ Clusters should spread the pods of a replication controller across zones [Serial]
Kubernetes e2e suite [sig-scheduling] Multi-AZ Clusters should spread the pods of a service across zones [Serial]
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol
Kubernetes e2e suite [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted
Kubernetes e2e suite [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]
Kubernetes e2e suite [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
Kubernetes e2e suite [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]
Kubernetes e2e suite [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]
Kubernetes e2e suite [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate
Kubernetes e2e suite [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms
Kubernetes e2e suite [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed
Kubernetes e2e suite [sig-scheduling] [Feature:GPUDevicePlugin] run Nvidia GPU Device Plugin tests
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] capacity provides storage capacity information
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup applied to the volume contents
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup skips ownership changes to the volume contents
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volume-lifecycle-performance should provision volumes at scale within performance constraints [Slow] [Serial]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] CSI V