PR | tanjunchen: fix staticcheck failures in test/e2e/common directory
Result | FAILURE |
Tests | 7 failed / 475 succeeded |
Started | |
Elapsed | 22m21s |
Revision | |
Builder | gke-prow-ssd-pool-1a225945-d46v |
Refs | master:ebb7b17f 83711:a5f50e48
pod | 7ea1cbc0-eb2d-11e9-a721-92520fa58fd7 |
infra-commit | 78fe9feb7 |
job-version | v1.17.0-alpha.1.268+db60af884cc611 |
repo | k8s.io/kubernetes |
repo-commit | db60af884cc61199cb2821ad88fa6bef61fdf2ad |
repos | {'k8s.io/kubernetes': 'master:ebb7b17f4dfb5ab705da1c7abedf12f03641b444,83711:a5f50e4812812ee19df0231fafd1f4c0581b6e43'}
revision | v1.17.0-alpha.1.268+db60af884cc611 |
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sProbing\scontainer\swith\sreadiness\sprobe\sthat\sfails\sshould\snever\sbe\sready\sand\snever\srestart\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:691 Unexpected error: <*errors.errorString | 0xc000831650>: { s: "pod 'test-webserver-7ee83b35-7972-441e-988d-4c12f464c94e' on 'tmp-node-e2e-150eaf11-cos-stable-60-9592-84-0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:33:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:33:19 +0000 UTC ContainersNotReady containers with unready status: [test-webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:33:19 +0000 UTC ContainersNotReady containers with unready status: [test-webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:33:19 +0000 UTC }]", } pod 'test-webserver-7ee83b35-7972-441e-988d-4c12f464c94e' on 'tmp-node-e2e-150eaf11-cos-stable-60-9592-84-0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:33:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:33:19 +0000 UTC ContainersNotReady containers with unready status: [test-webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:33:19 +0000 UTC ContainersNotReady containers with unready status: [test-webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:33:19 +0000 UTC }] occurred /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:108from junit_cos-stable2_08.xml
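Note: from the probe-failure events in the log below (Get http://10.100.0.187:81/ ... connection refused), the pod under test appears to be a test-webserver container serving on port 80 with an HTTP readiness probe pointed at port 81, so the probe can never succeed and the Ready condition stays False. The following is a minimal, hypothetical Go sketch of such a pod spec for orientation only; the function name, probe timings, and field choices are assumptions, not the actual code in test/e2e/common/container_probe.go (it targets the v1.17-era API, where corev1.Probe embeds Handler rather than ProbeHandler).

// Package probesketch holds an illustrative pod spec matching the failure above.
package probesketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// neverReadyWebserverPod returns a pod whose readiness probe always fails:
// the container listens on port 80, but the probe polls port 81.
func neverReadyWebserverPod(name string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "test-webserver",
				Image: "gcr.io/kubernetes-e2e-test-images/test-webserver:1.0",
				Ports: []corev1.ContainerPort{{ContainerPort: 80}},
				ReadinessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/",
							// Nothing listens on 81, so every probe attempt is refused.
							Port: intstr.FromInt(81),
						},
					},
					InitialDelaySeconds: 1,
					PeriodSeconds:       1,
				},
			}},
		},
	}
}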
[BeforeEach] [k8s.io] Probing container /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 �[1mSTEP�[0m: Creating a kubernetes client �[1mSTEP�[0m: Building a namespace api object, basename container-probe Oct 10 07:33:19.884: INFO: Skipping waiting for service account [BeforeEach] [k8s.io] Probing container /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:691 [AfterEach] [k8s.io] Probing container /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 �[1mSTEP�[0m: Collecting events from namespace "container-probe-2996". �[1mSTEP�[0m: Found 4 events. Oct 10 07:34:19.894: INFO: At 2019-10-10 07:33:20 +0000 UTC - event for test-webserver-7ee83b35-7972-441e-988d-4c12f464c94e: {kubelet tmp-node-e2e-150eaf11-cos-stable-60-9592-84-0} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/test-webserver:1.0" already present on machine Oct 10 07:34:19.894: INFO: At 2019-10-10 07:33:20 +0000 UTC - event for test-webserver-7ee83b35-7972-441e-988d-4c12f464c94e: {kubelet tmp-node-e2e-150eaf11-cos-stable-60-9592-84-0} Created: Created container test-webserver Oct 10 07:34:19.894: INFO: At 2019-10-10 07:33:20 +0000 UTC - event for test-webserver-7ee83b35-7972-441e-988d-4c12f464c94e: {kubelet tmp-node-e2e-150eaf11-cos-stable-60-9592-84-0} Started: Started container test-webserver Oct 10 07:34:19.894: INFO: At 2019-10-10 07:33:22 +0000 UTC - event for test-webserver-7ee83b35-7972-441e-988d-4c12f464c94e: {kubelet tmp-node-e2e-150eaf11-cos-stable-60-9592-84-0} Unhealthy: Readiness probe failed: Get http://10.100.0.187:81/: dial tcp 10.100.0.187:81: connect: connection refused Oct 10 07:34:19.896: INFO: POD NODE PHASE GRACE CONDITIONS Oct 10 07:34:19.896: INFO: test-webserver-7ee83b35-7972-441e-988d-4c12f464c94e tmp-node-e2e-150eaf11-cos-stable-60-9592-84-0 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:33:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:33:19 +0000 UTC ContainersNotReady containers with unready status: [test-webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:33:19 +0000 UTC ContainersNotReady containers with unready status: [test-webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:33:19 +0000 UTC }] Oct 10 07:34:19.896: INFO: Oct 10 07:34:19.897: INFO: Logging node info for node tmp-node-e2e-150eaf11-cos-stable-60-9592-84-0 Oct 10 07:34:19.899: INFO: Node Info: &Node{ObjectMeta:{tmp-node-e2e-150eaf11-cos-stable-60-9592-84-0 /api/v1/nodes/tmp-node-e2e-150eaf11-cos-stable-60-9592-84-0 e94a8c15-461f-4b7f-b939-380e2a5db21e 3419 0 2019-10-10 07:28:07 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:tmp-node-e2e-150eaf11-cos-stable-60-9592-84-0 kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16701562880 0} {<nil>} BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3885535232 0} {<nil>} 
3794468Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{15031406568 0} {<nil>} 15031406568 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3623391232 0} {<nil>} 3538468Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-10-10 07:34:07 +0000 UTC,LastTransitionTime:2019-10-10 07:28:03 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-10-10 07:34:07 +0000 UTC,LastTransitionTime:2019-10-10 07:28:03 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-10-10 07:34:07 +0000 UTC,LastTransitionTime:2019-10-10 07:28:03 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-10-10 07:34:07 +0000 UTC,LastTransitionTime:2019-10-10 07:28:03 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.37,},NodeAddress{Type:Hostname,Address:tmp-node-e2e-150eaf11-cos-stable-60-9592-84-0,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:38f78a99c39e03e8b50cdec9522d3885,SystemUUID:38F78A99-C39E-03E8-B50C-DEC9522D3885,BootID:36ae5571-151d-4075-8fdb-cb0d3c88c85f,KernelVersion:4.4.64+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:docker://1.13.1,KubeletVersion:v1.17.0-alpha.1.268+db60af884cc611,KubeProxyVersion:v1.17.0-alpha.1.268+db60af884cc611,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[perl@sha256:978a36abce7dcf726bcdbb3f5b0d69ad3beb0cf688e9348a488f6f6023a027db perl:5.26],SizeBytes:852903149,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64@sha256:80d4564d5ab49ecfea3b20f75cc676d8dfd8b2aca364ed4c1a8a55fbcaaed7f6 gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64:1.0],SizeBytes:634170972,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/gluster@sha256:e2d3308b2d27499d59f120ff46dfc6c4cb307a3f207f02894ecab902583761c9 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0],SizeBytes:332011484,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0],SizeBytes:225358913,},ContainerImage{Names:[httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2],SizeBytes:98707739,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/node-perf/npb-is@sha256:9d08dd99565b25af37c990cd4474a4284b27e7ceb3f98328bb481edefedf8aa5 gcr.io/kubernetes-e2e-test-images/node-perf/npb-is:1.0],SizeBytes:96288249,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep@sha256:564314549347619cfcdbe6c7d042a29e133a00e922b37682890fff17ac1a7804 
gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep:1.0],SizeBytes:96286449,},ContainerImage{Names:[google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 google/cadvisor:latest],SizeBytes:69583040,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:57345321,},ContainerImage{Names:[gcr.io/google-containers/debian-base@sha256:6966a0aedd7592c18ff2dd803c08bd85780ee19f5e3a2e7cf908a4cd837afcde gcr.io/google-containers/debian-base:0.4.1],SizeBytes:42323657,},ContainerImage{Names:[k8s.gcr.io/nvidia-gpu-device-plugin@sha256:4b036e8844920336fa48f36edeb7d4398f426d6a934ba022848deed2edbf09aa],SizeBytes:18981551,},ContainerImage{Names:[nginx@sha256:a3a0c4126587884f8d3090efca87f5af075d7e7ac8308cffc09a5a082d5f4760 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/ipc-utils@sha256:bb127be3a1ecac0516f672a5e223d94fe6021021534ecb7a02a607a63154c3d8 gcr.io/kubernetes-e2e-test-images/ipc-utils:1.0],SizeBytes:10039224,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[k8s.gcr.io/stress@sha256:f00aa1ddc963a3164aef741aab0fc05074ea96de6cd7e0d10077cf98dd72d594 k8s.gcr.io/stress:v1],SizeBytes:5494760,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[alpine@sha256:92251458088c638061cda8fd8b403b76d661a4dc6b7ee71b6affcf1872557b2b alpine:3.7],SizeBytes:4206494,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff],SizeBytes:1113554,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},} Oct 10 07:34:19.899: INFO: Logging kubelet events for node tmp-node-e2e-150eaf11-cos-stable-60-9592-84-0 Oct 10 07:34:19.900: INFO: Logging pods the kubelet thinks is on node tmp-node-e2e-150eaf11-cos-stable-60-9592-84-0 Oct 10 07:34:19.903: INFO: test-webserver-bbcc903c-4112-470d-930c-aad2531d16d9 started at 2019-10-10 07:30:23 +0000 UTC (0+1 container statuses recorded) Oct 10 07:34:19.903: INFO: Container test-webserver ready: true, restart count 0 Oct 10 07:34:19.903: INFO: 
test-webserver-7ee83b35-7972-441e-988d-4c12f464c94e started at 2019-10-10 07:33:19 +0000 UTC (0+1 container statuses recorded) Oct 10 07:34:19.903: INFO: Container test-webserver ready: false, restart count 0 W1010 07:34:19.904211 1315 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Oct 10 07:34:19.926: INFO: Latency metrics for node tmp-node-e2e-150eaf11-cos-stable-60-9592-84-0 Oct 10 07:34:19.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "container-probe-2996" for this suite.
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sProbing\scontainer\swith\sreadiness\sprobe\sthat\sfails\sshould\snever\sbe\sready\sand\snever\srestart\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:691 Unexpected error: <*errors.errorString | 0xc001192500>: { s: "pod 'test-webserver-aaa080aa-da29-4f26-9413-7dbf375a51b2' on 'tmp-node-e2e-150eaf11-cos-stable-63-10032-71-0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:33:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:33:27 +0000 UTC ContainersNotReady containers with unready status: [test-webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:33:27 +0000 UTC ContainersNotReady containers with unready status: [test-webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:33:27 +0000 UTC }]", } pod 'test-webserver-aaa080aa-da29-4f26-9413-7dbf375a51b2' on 'tmp-node-e2e-150eaf11-cos-stable-63-10032-71-0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:33:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:33:27 +0000 UTC ContainersNotReady containers with unready status: [test-webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:33:27 +0000 UTC ContainersNotReady containers with unready status: [test-webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:33:27 +0000 UTC }] occurred /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:108from junit_cos-stable1_04.xml
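Note: each of these failures is the framework timing out while polling for the pod's Ready condition, which is why the error reproduces the full condition dump ({Ready False ... ContainersNotReady ...}). The sketch below is an illustrative readiness poll written against v1.17-era client-go, not the e2e framework's actual helper at framework.go:691; the function name, poll interval, and the clientset/namespace/name inputs are assumptions.

// Package probesketch: illustrative wait-for-Ready loop mirroring the failure message.
package probesketch

import (
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodReady polls the pod until its Ready condition is True or the timeout expires.
func waitForPodReady(c kubernetes.Interface, namespace, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := c.CoreV1().Pods(namespace).Get(name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
				return true, nil
			}
		}
		// Ready is not True yet (e.g. ContainersNotReady); keep polling until timeout.
		return false, nil
	})
}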
[BeforeEach] [k8s.io] Probing container /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 �[1mSTEP�[0m: Creating a kubernetes client �[1mSTEP�[0m: Building a namespace api object, basename container-probe Oct 10 07:33:27.123: INFO: Skipping waiting for service account [BeforeEach] [k8s.io] Probing container /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:691 [AfterEach] [k8s.io] Probing container /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 �[1mSTEP�[0m: Collecting events from namespace "container-probe-955". �[1mSTEP�[0m: Found 4 events. Oct 10 07:34:27.143: INFO: At 2019-10-10 07:33:27 +0000 UTC - event for test-webserver-aaa080aa-da29-4f26-9413-7dbf375a51b2: {kubelet tmp-node-e2e-150eaf11-cos-stable-63-10032-71-0} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/test-webserver:1.0" already present on machine Oct 10 07:34:27.143: INFO: At 2019-10-10 07:33:27 +0000 UTC - event for test-webserver-aaa080aa-da29-4f26-9413-7dbf375a51b2: {kubelet tmp-node-e2e-150eaf11-cos-stable-63-10032-71-0} Created: Created container test-webserver Oct 10 07:34:27.143: INFO: At 2019-10-10 07:33:27 +0000 UTC - event for test-webserver-aaa080aa-da29-4f26-9413-7dbf375a51b2: {kubelet tmp-node-e2e-150eaf11-cos-stable-63-10032-71-0} Started: Started container test-webserver Oct 10 07:34:27.143: INFO: At 2019-10-10 07:33:31 +0000 UTC - event for test-webserver-aaa080aa-da29-4f26-9413-7dbf375a51b2: {kubelet tmp-node-e2e-150eaf11-cos-stable-63-10032-71-0} Unhealthy: Readiness probe failed: Get http://10.100.0.184:81/: dial tcp 10.100.0.184:81: connect: connection refused Oct 10 07:34:27.144: INFO: POD NODE PHASE GRACE CONDITIONS Oct 10 07:34:27.144: INFO: test-webserver-aaa080aa-da29-4f26-9413-7dbf375a51b2 tmp-node-e2e-150eaf11-cos-stable-63-10032-71-0 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:33:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:33:27 +0000 UTC ContainersNotReady containers with unready status: [test-webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:33:27 +0000 UTC ContainersNotReady containers with unready status: [test-webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:33:27 +0000 UTC }] Oct 10 07:34:27.144: INFO: Oct 10 07:34:27.146: INFO: Logging node info for node tmp-node-e2e-150eaf11-cos-stable-63-10032-71-0 Oct 10 07:34:27.147: INFO: Node Info: &Node{ObjectMeta:{tmp-node-e2e-150eaf11-cos-stable-63-10032-71-0 /api/v1/nodes/tmp-node-e2e-150eaf11-cos-stable-63-10032-71-0 7c4891d6-0676-4625-9904-3398ae0dad9a 3434 0 2019-10-10 07:28:11 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:tmp-node-e2e-150eaf11-cos-stable-63-10032-71-0 kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16684785664 0} {<nil>} BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3885465600 0} 
{<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{15016307073 0} {<nil>} 15016307073 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3623321600 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-10-10 07:34:12 +0000 UTC,LastTransitionTime:2019-10-10 07:28:07 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-10-10 07:34:12 +0000 UTC,LastTransitionTime:2019-10-10 07:28:07 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-10-10 07:34:12 +0000 UTC,LastTransitionTime:2019-10-10 07:28:07 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-10-10 07:34:12 +0000 UTC,LastTransitionTime:2019-10-10 07:28:07 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.24,},NodeAddress{Type:Hostname,Address:tmp-node-e2e-150eaf11-cos-stable-63-10032-71-0,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5e2533347e31ee9c8af152f4f8483a35,SystemUUID:5E253334-7E31-EE9C-8AF1-52F4F8483A35,BootID:b61cbbae-db49-4ac1-980f-03af27c02ad0,KernelVersion:4.4.86+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:docker://17.3.2,KubeletVersion:v1.17.0-alpha.1.268+db60af884cc611,KubeProxyVersion:v1.17.0-alpha.1.268+db60af884cc611,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[perl@sha256:978a36abce7dcf726bcdbb3f5b0d69ad3beb0cf688e9348a488f6f6023a027db perl:5.26],SizeBytes:852903149,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64@sha256:80d4564d5ab49ecfea3b20f75cc676d8dfd8b2aca364ed4c1a8a55fbcaaed7f6 gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64:1.0],SizeBytes:634170972,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/gluster@sha256:e2d3308b2d27499d59f120ff46dfc6c4cb307a3f207f02894ecab902583761c9 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0],SizeBytes:332011484,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0],SizeBytes:225358913,},ContainerImage{Names:[httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2],SizeBytes:98707739,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/node-perf/npb-is@sha256:9d08dd99565b25af37c990cd4474a4284b27e7ceb3f98328bb481edefedf8aa5 gcr.io/kubernetes-e2e-test-images/node-perf/npb-is:1.0],SizeBytes:96288249,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep@sha256:564314549347619cfcdbe6c7d042a29e133a00e922b37682890fff17ac1a7804 
gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep:1.0],SizeBytes:96286449,},ContainerImage{Names:[google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 google/cadvisor:latest],SizeBytes:69583040,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:57345321,},ContainerImage{Names:[gcr.io/google-containers/debian-base@sha256:6966a0aedd7592c18ff2dd803c08bd85780ee19f5e3a2e7cf908a4cd837afcde gcr.io/google-containers/debian-base:0.4.1],SizeBytes:42323657,},ContainerImage{Names:[k8s.gcr.io/nvidia-gpu-device-plugin@sha256:4b036e8844920336fa48f36edeb7d4398f426d6a934ba022848deed2edbf09aa],SizeBytes:18981551,},ContainerImage{Names:[nginx@sha256:a3a0c4126587884f8d3090efca87f5af075d7e7ac8308cffc09a5a082d5f4760 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/ipc-utils@sha256:bb127be3a1ecac0516f672a5e223d94fe6021021534ecb7a02a607a63154c3d8 gcr.io/kubernetes-e2e-test-images/ipc-utils:1.0],SizeBytes:10039224,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[k8s.gcr.io/stress@sha256:f00aa1ddc963a3164aef741aab0fc05074ea96de6cd7e0d10077cf98dd72d594 k8s.gcr.io/stress:v1],SizeBytes:5494760,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[alpine@sha256:92251458088c638061cda8fd8b403b76d661a4dc6b7ee71b6affcf1872557b2b alpine:3.7],SizeBytes:4206494,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff],SizeBytes:1113554,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},} Oct 10 07:34:27.147: INFO: Logging kubelet events for node tmp-node-e2e-150eaf11-cos-stable-63-10032-71-0 Oct 10 07:34:27.149: INFO: Logging pods the kubelet thinks is on node tmp-node-e2e-150eaf11-cos-stable-63-10032-71-0 Oct 10 07:34:27.151: INFO: test-webserver-de30b0e1-d95c-4ffa-a2ab-68ec3e199e3c started at 2019-10-10 07:30:28 +0000 UTC (0+1 container statuses recorded) Oct 10 07:34:27.151: INFO: Container test-webserver ready: true, restart count 0 Oct 10 07:34:27.151: INFO: 
test-webserver-aaa080aa-da29-4f26-9413-7dbf375a51b2 started at 2019-10-10 07:33:27 +0000 UTC (0+1 container statuses recorded) Oct 10 07:34:27.151: INFO: Container test-webserver ready: false, restart count 0 W1010 07:34:27.153483 1334 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Oct 10 07:34:27.180: INFO: Latency metrics for node tmp-node-e2e-150eaf11-cos-stable-63-10032-71-0 Oct 10 07:34:27.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "container-probe-955" for this suite.
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sProbing\scontainer\swith\sreadiness\sprobe\sthat\sfails\sshould\snever\sbe\sready\sand\snever\srestart\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:691 Unexpected error: <*errors.errorString | 0xc0010014b0>: { s: "pod 'test-webserver-efb0b8a1-eaec-49b1-9db3-6d65adc9b17b' on 'tmp-node-e2e-150eaf11-ubuntu-gke-1804-d1703-0-v20181113' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:30:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:30:09 +0000 UTC ContainersNotReady containers with unready status: [test-webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:30:09 +0000 UTC ContainersNotReady containers with unready status: [test-webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:30:09 +0000 UTC }]", } pod 'test-webserver-efb0b8a1-eaec-49b1-9db3-6d65adc9b17b' on 'tmp-node-e2e-150eaf11-ubuntu-gke-1804-d1703-0-v20181113' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:30:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:30:09 +0000 UTC ContainersNotReady containers with unready status: [test-webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:30:09 +0000 UTC ContainersNotReady containers with unready status: [test-webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:30:09 +0000 UTC }] occurred /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:108from junit_ubuntu_07.xml
[BeforeEach] [k8s.io] Probing container /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 �[1mSTEP�[0m: Creating a kubernetes client �[1mSTEP�[0m: Building a namespace api object, basename container-probe Oct 10 07:30:09.468: INFO: Skipping waiting for service account [BeforeEach] [k8s.io] Probing container /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:691 [AfterEach] [k8s.io] Probing container /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 �[1mSTEP�[0m: Collecting events from namespace "container-probe-1918". �[1mSTEP�[0m: Found 4 events. Oct 10 07:31:09.479: INFO: At 2019-10-10 07:30:10 +0000 UTC - event for test-webserver-efb0b8a1-eaec-49b1-9db3-6d65adc9b17b: {kubelet tmp-node-e2e-150eaf11-ubuntu-gke-1804-d1703-0-v20181113} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/test-webserver:1.0" already present on machine Oct 10 07:31:09.479: INFO: At 2019-10-10 07:30:10 +0000 UTC - event for test-webserver-efb0b8a1-eaec-49b1-9db3-6d65adc9b17b: {kubelet tmp-node-e2e-150eaf11-ubuntu-gke-1804-d1703-0-v20181113} Created: Created container test-webserver Oct 10 07:31:09.479: INFO: At 2019-10-10 07:30:10 +0000 UTC - event for test-webserver-efb0b8a1-eaec-49b1-9db3-6d65adc9b17b: {kubelet tmp-node-e2e-150eaf11-ubuntu-gke-1804-d1703-0-v20181113} Started: Started container test-webserver Oct 10 07:31:09.479: INFO: At 2019-10-10 07:30:11 +0000 UTC - event for test-webserver-efb0b8a1-eaec-49b1-9db3-6d65adc9b17b: {kubelet tmp-node-e2e-150eaf11-ubuntu-gke-1804-d1703-0-v20181113} Unhealthy: Readiness probe failed: Get http://10.100.0.91:81/: dial tcp 10.100.0.91:81: connect: connection refused Oct 10 07:31:09.480: INFO: POD NODE PHASE GRACE CONDITIONS Oct 10 07:31:09.480: INFO: test-webserver-efb0b8a1-eaec-49b1-9db3-6d65adc9b17b tmp-node-e2e-150eaf11-ubuntu-gke-1804-d1703-0-v20181113 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:30:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:30:09 +0000 UTC ContainersNotReady containers with unready status: [test-webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:30:09 +0000 UTC ContainersNotReady containers with unready status: [test-webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:30:09 +0000 UTC }] Oct 10 07:31:09.480: INFO: Oct 10 07:31:09.481: INFO: Logging node info for node tmp-node-e2e-150eaf11-ubuntu-gke-1804-d1703-0-v20181113 Oct 10 07:31:09.483: INFO: Node Info: &Node{ObjectMeta:{tmp-node-e2e-150eaf11-ubuntu-gke-1804-d1703-0-v20181113 /api/v1/nodes/tmp-node-e2e-150eaf11-ubuntu-gke-1804-d1703-0-v20181113 d46ac587-6ac4-4204-949c-264809b104f2 1587 0 2019-10-10 07:28:13 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:tmp-node-e2e-150eaf11-ubuntu-gke-1804-d1703-0-v20181113 kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20629221376 0} {<nil>} 
20145724Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3872571392 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{18566299208 0} {<nil>} 18566299208 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3610427392 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-10-10 07:30:13 +0000 UTC,LastTransitionTime:2019-10-10 07:28:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-10-10 07:30:13 +0000 UTC,LastTransitionTime:2019-10-10 07:28:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-10-10 07:30:13 +0000 UTC,LastTransitionTime:2019-10-10 07:28:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-10-10 07:30:13 +0000 UTC,LastTransitionTime:2019-10-10 07:28:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.38,},NodeAddress{Type:Hostname,Address:tmp-node-e2e-150eaf11-ubuntu-gke-1804-d1703-0-v20181113,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4ee4b282710d793399488442b9990d97,SystemUUID:4EE4B282-710D-7933-9948-8442B9990D97,BootID:65a2637e-05a8-4592-9103-98aad2c8e4e8,KernelVersion:4.15.0-1023-gcp,OSImage:Ubuntu 18.04.1 LTS,ContainerRuntimeVersion:docker://17.3.2,KubeletVersion:v1.17.0-alpha.1.268+db60af884cc611,KubeProxyVersion:v1.17.0-alpha.1.268+db60af884cc611,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[perl@sha256:978a36abce7dcf726bcdbb3f5b0d69ad3beb0cf688e9348a488f6f6023a027db perl:5.26],SizeBytes:852903149,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64@sha256:80d4564d5ab49ecfea3b20f75cc676d8dfd8b2aca364ed4c1a8a55fbcaaed7f6 gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64:1.0],SizeBytes:634170972,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/gluster@sha256:e2d3308b2d27499d59f120ff46dfc6c4cb307a3f207f02894ecab902583761c9 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0],SizeBytes:332011484,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0],SizeBytes:225358913,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2],SizeBytes:98707739,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/node-perf/npb-is@sha256:9d08dd99565b25af37c990cd4474a4284b27e7ceb3f98328bb481edefedf8aa5 
gcr.io/kubernetes-e2e-test-images/node-perf/npb-is:1.0],SizeBytes:96288249,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep@sha256:564314549347619cfcdbe6c7d042a29e133a00e922b37682890fff17ac1a7804 gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep:1.0],SizeBytes:96286449,},ContainerImage{Names:[google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 google/cadvisor:latest],SizeBytes:69583040,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:57345321,},ContainerImage{Names:[gcr.io/google-containers/debian-base@sha256:6966a0aedd7592c18ff2dd803c08bd85780ee19f5e3a2e7cf908a4cd837afcde gcr.io/google-containers/debian-base:0.4.1],SizeBytes:42323657,},ContainerImage{Names:[k8s.gcr.io/nvidia-gpu-device-plugin@sha256:4b036e8844920336fa48f36edeb7d4398f426d6a934ba022848deed2edbf09aa],SizeBytes:18981551,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/ipc-utils@sha256:bb127be3a1ecac0516f672a5e223d94fe6021021534ecb7a02a607a63154c3d8 gcr.io/kubernetes-e2e-test-images/ipc-utils:1.0],SizeBytes:10039224,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[k8s.gcr.io/stress@sha256:f00aa1ddc963a3164aef741aab0fc05074ea96de6cd7e0d10077cf98dd72d594 k8s.gcr.io/stress:v1],SizeBytes:5494760,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[alpine@sha256:8421d9a84432575381bfabd248f1eb56f3aa21d9d7cd2511583c68c9b7511d10 alpine:3.7],SizeBytes:4206494,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff],SizeBytes:1113554,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},ContainerImage{Names:[gke-nvidia-installer:fixed],SizeBytes:75,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},} Oct 10 07:31:09.484: INFO: Logging kubelet events for node tmp-node-e2e-150eaf11-ubuntu-gke-1804-d1703-0-v20181113 Oct 10 07:31:09.485: INFO: Logging pods the kubelet thinks is on node tmp-node-e2e-150eaf11-ubuntu-gke-1804-d1703-0-v20181113 
Oct 10 07:31:09.489: INFO: test-webserver-3fbde4c1-cf09-4382-972c-ad0ba794e0b7 started at 2019-10-10 07:30:48 +0000 UTC (0+1 container statuses recorded) Oct 10 07:31:09.489: INFO: Container test-webserver ready: true, restart count 0 Oct 10 07:31:09.489: INFO: image-pull-test43d1e958-8205-430b-a288-2804c0077978 started at 2019-10-10 07:30:05 +0000 UTC (0+1 container statuses recorded) Oct 10 07:31:09.489: INFO: Container image-pull-test ready: false, restart count 0 Oct 10 07:31:09.489: INFO: pod-projected-secrets-c7520542-2d7f-403d-912c-4522a60ca546 started at 2019-10-10 07:29:37 +0000 UTC (0+3 container statuses recorded) Oct 10 07:31:09.489: INFO: Container creates-volume-test ready: true, restart count 0 Oct 10 07:31:09.489: INFO: Container dels-volume-test ready: true, restart count 0 Oct 10 07:31:09.489: INFO: Container upds-volume-test ready: true, restart count 0 Oct 10 07:31:09.489: INFO: pod-projected-configmaps-66e93e1c-1adf-4fa2-82aa-6e59807e6ed6 started at 2019-10-10 07:30:34 +0000 UTC (0+3 container statuses recorded) Oct 10 07:31:09.489: INFO: Container createcm-volume-test ready: true, restart count 0 Oct 10 07:31:09.489: INFO: Container delcm-volume-test ready: true, restart count 0 Oct 10 07:31:09.489: INFO: Container updcm-volume-test ready: true, restart count 0 Oct 10 07:31:09.489: INFO: static-pod-eb62e5a2-4b4b-434f-a5b7-79056ed13ae3-tmp-node-e2e-150eaf11-ubuntu-gke-1804-d1703-0-v20181113 started at 2019-10-10 07:30:00 +0000 UTC (0+1 container statuses recorded) Oct 10 07:31:09.489: INFO: Container test ready: true, restart count 0 Oct 10 07:31:09.489: INFO: stats-busybox-0 started at 2019-10-10 07:31:06 +0000 UTC (0+1 container statuses recorded) Oct 10 07:31:09.489: INFO: Container busybox-container ready: false, restart count 0 Oct 10 07:31:09.489: INFO: pod-with-prestop-http-hook started at 2019-10-10 07:31:08 +0000 UTC (0+1 container statuses recorded) Oct 10 07:31:09.489: INFO: Container pod-with-prestop-http-hook ready: false, restart count 0 Oct 10 07:31:09.489: INFO: liveness-d9159aa2-be77-432b-8a73-8582f0a6a4b4 started at 2019-10-10 07:29:09 +0000 UTC (0+1 container statuses recorded) Oct 10 07:31:09.489: INFO: Container liveness ready: true, restart count 0 Oct 10 07:31:09.489: INFO: test-webserver-efb0b8a1-eaec-49b1-9db3-6d65adc9b17b started at 2019-10-10 07:30:09 +0000 UTC (0+1 container statuses recorded) Oct 10 07:31:09.489: INFO: Container test-webserver ready: false, restart count 0 Oct 10 07:31:09.489: INFO: pod-configmaps-223461f5-2f56-425c-a3d9-af8405cc23c0 started at 2019-10-10 07:31:02 +0000 UTC (0+1 container statuses recorded) Oct 10 07:31:09.489: INFO: Container configmap-volume-test ready: true, restart count 0 Oct 10 07:31:09.489: INFO: stats-busybox-1 started at 2019-10-10 07:31:07 +0000 UTC (0+1 container statuses recorded) Oct 10 07:31:09.489: INFO: Container busybox-container ready: true, restart count 0 Oct 10 07:31:09.489: INFO: pod-handle-http-request started at 2019-10-10 07:31:06 +0000 UTC (0+1 container statuses recorded) Oct 10 07:31:09.489: INFO: Container pod-handle-http-request ready: true, restart count 0 W1010 07:31:09.491029 2692 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Oct 10 07:31:09.537: INFO: Latency metrics for node tmp-node-e2e-150eaf11-ubuntu-gke-1804-d1703-0-v20181113 Oct 10 07:31:09.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "container-probe-1918" for this suite.
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sProbing\scontainer\swith\sreadiness\sprobe\sthat\sfails\sshould\snever\sbe\sready\sand\snever\srestart\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:691 Unexpected error: <*errors.errorString | 0xc0006b0cc0>: { s: "pod 'test-webserver-7c630928-711a-4228-a897-e41552e09d7b' on 'tmp-node-e2e-150eaf11-cos-stable-60-9592-84-0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:32:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:32:19 +0000 UTC ContainersNotReady containers with unready status: [test-webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:32:19 +0000 UTC ContainersNotReady containers with unready status: [test-webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:32:19 +0000 UTC }]", } pod 'test-webserver-7c630928-711a-4228-a897-e41552e09d7b' on 'tmp-node-e2e-150eaf11-cos-stable-60-9592-84-0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:32:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:32:19 +0000 UTC ContainersNotReady containers with unready status: [test-webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:32:19 +0000 UTC ContainersNotReady containers with unready status: [test-webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:32:19 +0000 UTC }] occurred /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:108from junit_cos-stable2_08.xml
[BeforeEach] [k8s.io] Probing container /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 �[1mSTEP�[0m: Creating a kubernetes client �[1mSTEP�[0m: Building a namespace api object, basename container-probe Oct 10 07:32:19.777: INFO: Skipping waiting for service account [BeforeEach] [k8s.io] Probing container /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:691 [AfterEach] [k8s.io] Probing container /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 �[1mSTEP�[0m: Collecting events from namespace "container-probe-6479". �[1mSTEP�[0m: Found 4 events. Oct 10 07:33:19.838: INFO: At 2019-10-10 07:32:20 +0000 UTC - event for test-webserver-7c630928-711a-4228-a897-e41552e09d7b: {kubelet tmp-node-e2e-150eaf11-cos-stable-60-9592-84-0} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/test-webserver:1.0" already present on machine Oct 10 07:33:19.838: INFO: At 2019-10-10 07:32:20 +0000 UTC - event for test-webserver-7c630928-711a-4228-a897-e41552e09d7b: {kubelet tmp-node-e2e-150eaf11-cos-stable-60-9592-84-0} Created: Created container test-webserver Oct 10 07:33:19.838: INFO: At 2019-10-10 07:32:20 +0000 UTC - event for test-webserver-7c630928-711a-4228-a897-e41552e09d7b: {kubelet tmp-node-e2e-150eaf11-cos-stable-60-9592-84-0} Started: Started container test-webserver Oct 10 07:33:19.838: INFO: At 2019-10-10 07:32:22 +0000 UTC - event for test-webserver-7c630928-711a-4228-a897-e41552e09d7b: {kubelet tmp-node-e2e-150eaf11-cos-stable-60-9592-84-0} Unhealthy: Readiness probe failed: Get http://10.100.0.161:81/: dial tcp 10.100.0.161:81: connect: connection refused Oct 10 07:33:19.840: INFO: POD NODE PHASE GRACE CONDITIONS Oct 10 07:33:19.840: INFO: test-webserver-7c630928-711a-4228-a897-e41552e09d7b tmp-node-e2e-150eaf11-cos-stable-60-9592-84-0 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:32:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:32:19 +0000 UTC ContainersNotReady containers with unready status: [test-webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:32:19 +0000 UTC ContainersNotReady containers with unready status: [test-webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:32:19 +0000 UTC }] Oct 10 07:33:19.840: INFO: Oct 10 07:33:19.841: INFO: Logging node info for node tmp-node-e2e-150eaf11-cos-stable-60-9592-84-0 Oct 10 07:33:19.844: INFO: Node Info: &Node{ObjectMeta:{tmp-node-e2e-150eaf11-cos-stable-60-9592-84-0 /api/v1/nodes/tmp-node-e2e-150eaf11-cos-stable-60-9592-84-0 e94a8c15-461f-4b7f-b939-380e2a5db21e 3298 0 2019-10-10 07:28:07 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:tmp-node-e2e-150eaf11-cos-stable-60-9592-84-0 kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16701562880 0} {<nil>} BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3885535232 0} {<nil>} 
3794468Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{15031406568 0} {<nil>} 15031406568 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3623391232 0} {<nil>} 3538468Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-10-10 07:33:07 +0000 UTC,LastTransitionTime:2019-10-10 07:28:03 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-10-10 07:33:07 +0000 UTC,LastTransitionTime:2019-10-10 07:28:03 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-10-10 07:33:07 +0000 UTC,LastTransitionTime:2019-10-10 07:28:03 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-10-10 07:33:07 +0000 UTC,LastTransitionTime:2019-10-10 07:28:03 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.37,},NodeAddress{Type:Hostname,Address:tmp-node-e2e-150eaf11-cos-stable-60-9592-84-0,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:38f78a99c39e03e8b50cdec9522d3885,SystemUUID:38F78A99-C39E-03E8-B50C-DEC9522D3885,BootID:36ae5571-151d-4075-8fdb-cb0d3c88c85f,KernelVersion:4.4.64+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:docker://1.13.1,KubeletVersion:v1.17.0-alpha.1.268+db60af884cc611,KubeProxyVersion:v1.17.0-alpha.1.268+db60af884cc611,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[perl@sha256:978a36abce7dcf726bcdbb3f5b0d69ad3beb0cf688e9348a488f6f6023a027db perl:5.26],SizeBytes:852903149,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64@sha256:80d4564d5ab49ecfea3b20f75cc676d8dfd8b2aca364ed4c1a8a55fbcaaed7f6 gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64:1.0],SizeBytes:634170972,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/gluster@sha256:e2d3308b2d27499d59f120ff46dfc6c4cb307a3f207f02894ecab902583761c9 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0],SizeBytes:332011484,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0],SizeBytes:225358913,},ContainerImage{Names:[httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2],SizeBytes:98707739,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/node-perf/npb-is@sha256:9d08dd99565b25af37c990cd4474a4284b27e7ceb3f98328bb481edefedf8aa5 gcr.io/kubernetes-e2e-test-images/node-perf/npb-is:1.0],SizeBytes:96288249,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep@sha256:564314549347619cfcdbe6c7d042a29e133a00e922b37682890fff17ac1a7804 
gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep:1.0],SizeBytes:96286449,},ContainerImage{Names:[google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 google/cadvisor:latest],SizeBytes:69583040,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:57345321,},ContainerImage{Names:[gcr.io/google-containers/debian-base@sha256:6966a0aedd7592c18ff2dd803c08bd85780ee19f5e3a2e7cf908a4cd837afcde gcr.io/google-containers/debian-base:0.4.1],SizeBytes:42323657,},ContainerImage{Names:[k8s.gcr.io/nvidia-gpu-device-plugin@sha256:4b036e8844920336fa48f36edeb7d4398f426d6a934ba022848deed2edbf09aa],SizeBytes:18981551,},ContainerImage{Names:[nginx@sha256:a3a0c4126587884f8d3090efca87f5af075d7e7ac8308cffc09a5a082d5f4760 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/ipc-utils@sha256:bb127be3a1ecac0516f672a5e223d94fe6021021534ecb7a02a607a63154c3d8 gcr.io/kubernetes-e2e-test-images/ipc-utils:1.0],SizeBytes:10039224,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[k8s.gcr.io/stress@sha256:f00aa1ddc963a3164aef741aab0fc05074ea96de6cd7e0d10077cf98dd72d594 k8s.gcr.io/stress:v1],SizeBytes:5494760,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[alpine@sha256:92251458088c638061cda8fd8b403b76d661a4dc6b7ee71b6affcf1872557b2b alpine:3.7],SizeBytes:4206494,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff],SizeBytes:1113554,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},} Oct 10 07:33:19.844: INFO: Logging kubelet events for node tmp-node-e2e-150eaf11-cos-stable-60-9592-84-0 Oct 10 07:33:19.845: INFO: Logging pods the kubelet thinks is on node tmp-node-e2e-150eaf11-cos-stable-60-9592-84-0 Oct 10 07:33:19.848: INFO: test-webserver-bbcc903c-4112-470d-930c-aad2531d16d9 started at 2019-10-10 07:30:23 +0000 UTC (0+1 container statuses recorded) Oct 10 07:33:19.848: INFO: Container test-webserver ready: true, restart count 0 Oct 10 07:33:19.848: INFO: busybox-c4b341d7-0b3b-48bf-8572-072d4e4e070f 
started at 2019-10-10 07:29:19 +0000 UTC (0+1 container statuses recorded) Oct 10 07:33:19.848: INFO: Container busybox ready: true, restart count 0 Oct 10 07:33:19.848: INFO: pod-handle-http-request started at 2019-10-10 07:32:57 +0000 UTC (0+1 container statuses recorded) Oct 10 07:33:19.848: INFO: Container pod-handle-http-request ready: false, restart count 0 Oct 10 07:33:19.848: INFO: test-webserver-7c630928-711a-4228-a897-e41552e09d7b started at 2019-10-10 07:32:19 +0000 UTC (0+1 container statuses recorded) Oct 10 07:33:19.848: INFO: Container test-webserver ready: false, restart count 0 Oct 10 07:33:19.848: INFO: image-pull-test8785396d-adcc-492e-9362-501e44bef7cf started at 2019-10-10 07:28:54 +0000 UTC (0+1 container statuses recorded) Oct 10 07:33:19.848: INFO: Container image-pull-test ready: false, restart count 0 Oct 10 07:33:19.848: INFO: static-pod-3bd99d76-0900-4add-ad22-437684a841ba-tmp-node-e2e-150eaf11-cos-stable-60-9592-84-0 started at 2019-10-10 07:32:07 +0000 UTC (0+1 container statuses recorded) Oct 10 07:33:19.848: INFO: Container test ready: true, restart count 0 W1010 07:33:19.849789 1315 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Oct 10 07:33:19.877: INFO: Latency metrics for node tmp-node-e2e-150eaf11-cos-stable-60-9592-84-0 Oct 10 07:33:19.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "container-probe-6479" for this suite.
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sProbing\scontainer\swith\sreadiness\sprobe\sthat\sfails\sshould\snever\sbe\sready\sand\snever\srestart\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:691 Unexpected error: <*errors.errorString | 0xc0008f4920>: { s: "pod 'test-webserver-0a6e2e71-2d5b-45a3-94c3-2f5672bc685f' on 'tmp-node-e2e-150eaf11-cos-stable-63-10032-71-0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:32:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:32:27 +0000 UTC ContainersNotReady containers with unready status: [test-webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:32:27 +0000 UTC ContainersNotReady containers with unready status: [test-webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:32:27 +0000 UTC }]", } pod 'test-webserver-0a6e2e71-2d5b-45a3-94c3-2f5672bc685f' on 'tmp-node-e2e-150eaf11-cos-stable-63-10032-71-0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:32:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:32:27 +0000 UTC ContainersNotReady containers with unready status: [test-webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:32:27 +0000 UTC ContainersNotReady containers with unready status: [test-webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:32:27 +0000 UTC }] occurred /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:108from junit_cos-stable1_04.xml
[BeforeEach] [k8s.io] Probing container /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 �[1mSTEP�[0m: Creating a kubernetes client �[1mSTEP�[0m: Building a namespace api object, basename container-probe Oct 10 07:32:26.966: INFO: Skipping waiting for service account [BeforeEach] [k8s.io] Probing container /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:691 [AfterEach] [k8s.io] Probing container /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 �[1mSTEP�[0m: Collecting events from namespace "container-probe-6123". �[1mSTEP�[0m: Found 4 events. Oct 10 07:33:27.031: INFO: At 2019-10-10 07:32:27 +0000 UTC - event for test-webserver-0a6e2e71-2d5b-45a3-94c3-2f5672bc685f: {kubelet tmp-node-e2e-150eaf11-cos-stable-63-10032-71-0} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/test-webserver:1.0" already present on machine Oct 10 07:33:27.031: INFO: At 2019-10-10 07:32:27 +0000 UTC - event for test-webserver-0a6e2e71-2d5b-45a3-94c3-2f5672bc685f: {kubelet tmp-node-e2e-150eaf11-cos-stable-63-10032-71-0} Created: Created container test-webserver Oct 10 07:33:27.031: INFO: At 2019-10-10 07:32:27 +0000 UTC - event for test-webserver-0a6e2e71-2d5b-45a3-94c3-2f5672bc685f: {kubelet tmp-node-e2e-150eaf11-cos-stable-63-10032-71-0} Started: Started container test-webserver Oct 10 07:33:27.031: INFO: At 2019-10-10 07:32:33 +0000 UTC - event for test-webserver-0a6e2e71-2d5b-45a3-94c3-2f5672bc685f: {kubelet tmp-node-e2e-150eaf11-cos-stable-63-10032-71-0} Unhealthy: Readiness probe failed: Get http://10.100.0.159:81/: dial tcp 10.100.0.159:81: connect: connection refused Oct 10 07:33:27.033: INFO: POD NODE PHASE GRACE CONDITIONS Oct 10 07:33:27.033: INFO: test-webserver-0a6e2e71-2d5b-45a3-94c3-2f5672bc685f tmp-node-e2e-150eaf11-cos-stable-63-10032-71-0 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:32:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:32:27 +0000 UTC ContainersNotReady containers with unready status: [test-webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:32:27 +0000 UTC ContainersNotReady containers with unready status: [test-webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:32:27 +0000 UTC }] Oct 10 07:33:27.033: INFO: Oct 10 07:33:27.035: INFO: Logging node info for node tmp-node-e2e-150eaf11-cos-stable-63-10032-71-0 Oct 10 07:33:27.037: INFO: Node Info: &Node{ObjectMeta:{tmp-node-e2e-150eaf11-cos-stable-63-10032-71-0 /api/v1/nodes/tmp-node-e2e-150eaf11-cos-stable-63-10032-71-0 7c4891d6-0676-4625-9904-3398ae0dad9a 3264 0 2019-10-10 07:28:11 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:tmp-node-e2e-150eaf11-cos-stable-63-10032-71-0 kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16684785664 0} {<nil>} BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3885465600 0} 
{<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{15016307073 0} {<nil>} 15016307073 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3623321600 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-10-10 07:33:12 +0000 UTC,LastTransitionTime:2019-10-10 07:28:07 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-10-10 07:33:12 +0000 UTC,LastTransitionTime:2019-10-10 07:28:07 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-10-10 07:33:12 +0000 UTC,LastTransitionTime:2019-10-10 07:28:07 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-10-10 07:33:12 +0000 UTC,LastTransitionTime:2019-10-10 07:28:07 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.24,},NodeAddress{Type:Hostname,Address:tmp-node-e2e-150eaf11-cos-stable-63-10032-71-0,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5e2533347e31ee9c8af152f4f8483a35,SystemUUID:5E253334-7E31-EE9C-8AF1-52F4F8483A35,BootID:b61cbbae-db49-4ac1-980f-03af27c02ad0,KernelVersion:4.4.86+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:docker://17.3.2,KubeletVersion:v1.17.0-alpha.1.268+db60af884cc611,KubeProxyVersion:v1.17.0-alpha.1.268+db60af884cc611,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[perl@sha256:978a36abce7dcf726bcdbb3f5b0d69ad3beb0cf688e9348a488f6f6023a027db perl:5.26],SizeBytes:852903149,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64@sha256:80d4564d5ab49ecfea3b20f75cc676d8dfd8b2aca364ed4c1a8a55fbcaaed7f6 gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64:1.0],SizeBytes:634170972,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/gluster@sha256:e2d3308b2d27499d59f120ff46dfc6c4cb307a3f207f02894ecab902583761c9 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0],SizeBytes:332011484,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0],SizeBytes:225358913,},ContainerImage{Names:[httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2],SizeBytes:98707739,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/node-perf/npb-is@sha256:9d08dd99565b25af37c990cd4474a4284b27e7ceb3f98328bb481edefedf8aa5 gcr.io/kubernetes-e2e-test-images/node-perf/npb-is:1.0],SizeBytes:96288249,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep@sha256:564314549347619cfcdbe6c7d042a29e133a00e922b37682890fff17ac1a7804 
gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep:1.0],SizeBytes:96286449,},ContainerImage{Names:[google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 google/cadvisor:latest],SizeBytes:69583040,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:57345321,},ContainerImage{Names:[gcr.io/google-containers/debian-base@sha256:6966a0aedd7592c18ff2dd803c08bd85780ee19f5e3a2e7cf908a4cd837afcde gcr.io/google-containers/debian-base:0.4.1],SizeBytes:42323657,},ContainerImage{Names:[k8s.gcr.io/nvidia-gpu-device-plugin@sha256:4b036e8844920336fa48f36edeb7d4398f426d6a934ba022848deed2edbf09aa],SizeBytes:18981551,},ContainerImage{Names:[nginx@sha256:a3a0c4126587884f8d3090efca87f5af075d7e7ac8308cffc09a5a082d5f4760 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/ipc-utils@sha256:bb127be3a1ecac0516f672a5e223d94fe6021021534ecb7a02a607a63154c3d8 gcr.io/kubernetes-e2e-test-images/ipc-utils:1.0],SizeBytes:10039224,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[k8s.gcr.io/stress@sha256:f00aa1ddc963a3164aef741aab0fc05074ea96de6cd7e0d10077cf98dd72d594 k8s.gcr.io/stress:v1],SizeBytes:5494760,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[alpine@sha256:92251458088c638061cda8fd8b403b76d661a4dc6b7ee71b6affcf1872557b2b alpine:3.7],SizeBytes:4206494,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff],SizeBytes:1113554,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},} Oct 10 07:33:27.038: INFO: Logging kubelet events for node tmp-node-e2e-150eaf11-cos-stable-63-10032-71-0 Oct 10 07:33:27.039: INFO: Logging pods the kubelet thinks is on node tmp-node-e2e-150eaf11-cos-stable-63-10032-71-0 Oct 10 07:33:27.043: INFO: busybox-5e7fc69d-fb9d-46a2-abbc-d95a7356bd20 started at 2019-10-10 07:29:25 +0000 UTC (0+1 container statuses recorded) Oct 10 07:33:27.043: INFO: Container busybox ready: true, restart count 0 Oct 10 07:33:27.043: INFO: test-webserver-de30b0e1-d95c-4ffa-a2ab-68ec3e199e3c 
started at 2019-10-10 07:30:28 +0000 UTC (0+1 container statuses recorded) Oct 10 07:33:27.043: INFO: Container test-webserver ready: true, restart count 0 Oct 10 07:33:27.043: INFO: pod-handle-http-request started at 2019-10-10 07:33:11 +0000 UTC (0+1 container statuses recorded) Oct 10 07:33:27.043: INFO: Container pod-handle-http-request ready: false, restart count 0 Oct 10 07:33:27.043: INFO: test-container-pod started at 2019-10-10 07:33:24 +0000 UTC (0+1 container statuses recorded) Oct 10 07:33:27.043: INFO: Container webserver ready: true, restart count 0 Oct 10 07:33:27.043: INFO: test-webserver-0a6e2e71-2d5b-45a3-94c3-2f5672bc685f started at 2019-10-10 07:32:27 +0000 UTC (0+1 container statuses recorded) Oct 10 07:33:27.043: INFO: Container test-webserver ready: false, restart count 0 Oct 10 07:33:27.043: INFO: static-pod-d44c67af-db35-4f95-9fc8-d07e12a9ac39-tmp-node-e2e-150eaf11-cos-stable-63-10032-71-0 started at 2019-10-10 07:32:08 +0000 UTC (0+1 container statuses recorded) Oct 10 07:33:27.043: INFO: Container test ready: true, restart count 0 Oct 10 07:33:27.043: INFO: pod-projected-configmaps-db0822a0-5288-4817-8a56-1335c3e2d147 started at 2019-10-10 07:32:39 +0000 UTC (0+1 container statuses recorded) Oct 10 07:33:27.043: INFO: Container projected-configmap-volume-test ready: true, restart count 0 Oct 10 07:33:27.043: INFO: image-pull-teste12cc12f-f859-4bf0-91f8-d6fd73a96358 started at 2019-10-10 07:29:00 +0000 UTC (0+1 container statuses recorded) Oct 10 07:33:27.043: INFO: Container image-pull-test ready: false, restart count 0 Oct 10 07:33:27.043: INFO: netserver-0 started at 2019-10-10 07:33:04 +0000 UTC (0+1 container statuses recorded) Oct 10 07:33:27.043: INFO: Container webserver ready: true, restart count 0 Oct 10 07:33:27.043: INFO: host-test-container-pod started at 2019-10-10 07:33:24 +0000 UTC (0+1 container statuses recorded) Oct 10 07:33:27.043: INFO: Container agnhost ready: true, restart count 0 W1010 07:33:27.045549 1334 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Oct 10 07:33:27.097: INFO: Latency metrics for node tmp-node-e2e-150eaf11-cos-stable-63-10032-71-0 Oct 10 07:33:27.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6123" for this suite.
Find test-webserver-0a6e2e71-2d5b-45a3-94c3-2f5672bc685f mentions in log files | View test history on testgrid
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sProbing\scontainer\swith\sreadiness\sprobe\sthat\sfails\sshould\snever\sbe\sready\sand\snever\srestart\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:691 Unexpected error: <*errors.errorString | 0xc000f163a0>: { s: "pod 'test-webserver-7c82f071-76b3-4e72-9c35-d760adaa4da9' on 'tmp-node-e2e-150eaf11-ubuntu-gke-1804-d1703-0-v20181113' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:31:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:31:09 +0000 UTC ContainersNotReady containers with unready status: [test-webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:31:09 +0000 UTC ContainersNotReady containers with unready status: [test-webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:31:09 +0000 UTC }]", } pod 'test-webserver-7c82f071-76b3-4e72-9c35-d760adaa4da9' on 'tmp-node-e2e-150eaf11-ubuntu-gke-1804-d1703-0-v20181113' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:31:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:31:09 +0000 UTC ContainersNotReady containers with unready status: [test-webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:31:09 +0000 UTC ContainersNotReady containers with unready status: [test-webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:31:09 +0000 UTC }] occurred /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:108from junit_ubuntu_07.xml
[BeforeEach] [k8s.io] Probing container /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 �[1mSTEP�[0m: Creating a kubernetes client �[1mSTEP�[0m: Building a namespace api object, basename container-probe Oct 10 07:31:09.569: INFO: Skipping waiting for service account [BeforeEach] [k8s.io] Probing container /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:691 [AfterEach] [k8s.io] Probing container /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 �[1mSTEP�[0m: Collecting events from namespace "container-probe-5148". �[1mSTEP�[0m: Found 4 events. Oct 10 07:32:09.623: INFO: At 2019-10-10 07:31:10 +0000 UTC - event for test-webserver-7c82f071-76b3-4e72-9c35-d760adaa4da9: {kubelet tmp-node-e2e-150eaf11-ubuntu-gke-1804-d1703-0-v20181113} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/test-webserver:1.0" already present on machine Oct 10 07:32:09.623: INFO: At 2019-10-10 07:31:10 +0000 UTC - event for test-webserver-7c82f071-76b3-4e72-9c35-d760adaa4da9: {kubelet tmp-node-e2e-150eaf11-ubuntu-gke-1804-d1703-0-v20181113} Created: Created container test-webserver Oct 10 07:32:09.623: INFO: At 2019-10-10 07:31:10 +0000 UTC - event for test-webserver-7c82f071-76b3-4e72-9c35-d760adaa4da9: {kubelet tmp-node-e2e-150eaf11-ubuntu-gke-1804-d1703-0-v20181113} Started: Started container test-webserver Oct 10 07:32:09.623: INFO: At 2019-10-10 07:31:13 +0000 UTC - event for test-webserver-7c82f071-76b3-4e72-9c35-d760adaa4da9: {kubelet tmp-node-e2e-150eaf11-ubuntu-gke-1804-d1703-0-v20181113} Unhealthy: Readiness probe failed: Get http://10.100.0.128:81/: dial tcp 10.100.0.128:81: connect: connection refused Oct 10 07:32:09.624: INFO: POD NODE PHASE GRACE CONDITIONS Oct 10 07:32:09.624: INFO: test-webserver-7c82f071-76b3-4e72-9c35-d760adaa4da9 tmp-node-e2e-150eaf11-ubuntu-gke-1804-d1703-0-v20181113 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:31:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:31:09 +0000 UTC ContainersNotReady containers with unready status: [test-webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:31:09 +0000 UTC ContainersNotReady containers with unready status: [test-webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-10-10 07:31:09 +0000 UTC }] Oct 10 07:32:09.625: INFO: Oct 10 07:32:09.626: INFO: Logging node info for node tmp-node-e2e-150eaf11-ubuntu-gke-1804-d1703-0-v20181113 Oct 10 07:32:09.627: INFO: Node Info: &Node{ObjectMeta:{tmp-node-e2e-150eaf11-ubuntu-gke-1804-d1703-0-v20181113 /api/v1/nodes/tmp-node-e2e-150eaf11-ubuntu-gke-1804-d1703-0-v20181113 d46ac587-6ac4-4204-949c-264809b104f2 2227 0 2019-10-10 07:28:13 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:tmp-node-e2e-150eaf11-ubuntu-gke-1804-d1703-0-v20181113 kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20629221376 0} {<nil>} 
20145724Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3872571392 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{18566299208 0} {<nil>} 18566299208 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3610427392 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-10-10 07:31:13 +0000 UTC,LastTransitionTime:2019-10-10 07:28:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-10-10 07:31:13 +0000 UTC,LastTransitionTime:2019-10-10 07:28:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-10-10 07:31:13 +0000 UTC,LastTransitionTime:2019-10-10 07:28:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-10-10 07:31:13 +0000 UTC,LastTransitionTime:2019-10-10 07:28:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.38,},NodeAddress{Type:Hostname,Address:tmp-node-e2e-150eaf11-ubuntu-gke-1804-d1703-0-v20181113,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4ee4b282710d793399488442b9990d97,SystemUUID:4EE4B282-710D-7933-9948-8442B9990D97,BootID:65a2637e-05a8-4592-9103-98aad2c8e4e8,KernelVersion:4.15.0-1023-gcp,OSImage:Ubuntu 18.04.1 LTS,ContainerRuntimeVersion:docker://17.3.2,KubeletVersion:v1.17.0-alpha.1.268+db60af884cc611,KubeProxyVersion:v1.17.0-alpha.1.268+db60af884cc611,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[perl@sha256:978a36abce7dcf726bcdbb3f5b0d69ad3beb0cf688e9348a488f6f6023a027db perl:5.26],SizeBytes:852903149,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64@sha256:80d4564d5ab49ecfea3b20f75cc676d8dfd8b2aca364ed4c1a8a55fbcaaed7f6 gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64:1.0],SizeBytes:634170972,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/gluster@sha256:e2d3308b2d27499d59f120ff46dfc6c4cb307a3f207f02894ecab902583761c9 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0],SizeBytes:332011484,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0],SizeBytes:225358913,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2],SizeBytes:98707739,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/node-perf/npb-is@sha256:9d08dd99565b25af37c990cd4474a4284b27e7ceb3f98328bb481edefedf8aa5 
gcr.io/kubernetes-e2e-test-images/node-perf/npb-is:1.0],SizeBytes:96288249,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep@sha256:564314549347619cfcdbe6c7d042a29e133a00e922b37682890fff17ac1a7804 gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep:1.0],SizeBytes:96286449,},ContainerImage{Names:[google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 google/cadvisor:latest],SizeBytes:69583040,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:57345321,},ContainerImage{Names:[gcr.io/google-containers/debian-base@sha256:6966a0aedd7592c18ff2dd803c08bd85780ee19f5e3a2e7cf908a4cd837afcde gcr.io/google-containers/debian-base:0.4.1],SizeBytes:42323657,},ContainerImage{Names:[k8s.gcr.io/nvidia-gpu-device-plugin@sha256:4b036e8844920336fa48f36edeb7d4398f426d6a934ba022848deed2edbf09aa],SizeBytes:18981551,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/ipc-utils@sha256:bb127be3a1ecac0516f672a5e223d94fe6021021534ecb7a02a607a63154c3d8 gcr.io/kubernetes-e2e-test-images/ipc-utils:1.0],SizeBytes:10039224,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[k8s.gcr.io/stress@sha256:f00aa1ddc963a3164aef741aab0fc05074ea96de6cd7e0d10077cf98dd72d594 k8s.gcr.io/stress:v1],SizeBytes:5494760,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[alpine@sha256:8421d9a84432575381bfabd248f1eb56f3aa21d9d7cd2511583c68c9b7511d10 alpine:3.7],SizeBytes:4206494,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff],SizeBytes:1113554,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},ContainerImage{Names:[gke-nvidia-installer:fixed],SizeBytes:75,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},} Oct 10 07:32:09.627: INFO: Logging kubelet events for node tmp-node-e2e-150eaf11-ubuntu-gke-1804-d1703-0-v20181113 Oct 10 07:32:09.629: INFO: Logging pods the kubelet thinks is on node tmp-node-e2e-150eaf11-ubuntu-gke-1804-d1703-0-v20181113 
Oct 10 07:32:09.638: INFO: stats-busybox-1 started at 2019-10-10 07:31:07 +0000 UTC (0+1 container statuses recorded) Oct 10 07:32:09.638: INFO: Container busybox-container ready: true, restart count 1 Oct 10 07:32:09.638: INFO: busybox-scheduling-47a2acc4-2790-4ffb-a6d1-f02e547c3985 started at 2019-10-10 07:31:31 +0000 UTC (0+1 container statuses recorded) Oct 10 07:32:09.638: INFO: Container busybox-scheduling-47a2acc4-2790-4ffb-a6d1-f02e547c3985 ready: false, restart count 0 Oct 10 07:32:09.638: INFO: test-webserver-3fbde4c1-cf09-4382-972c-ad0ba794e0b7 started at 2019-10-10 07:30:48 +0000 UTC (0+1 container statuses recorded) Oct 10 07:32:09.638: INFO: Container test-webserver ready: true, restart count 0 Oct 10 07:32:09.641: INFO: pod-983ce3b4-aba4-40ec-bbbd-25b74ae0c1c4 started at 2019-10-10 07:32:08 +0000 UTC (0+1 container statuses recorded) Oct 10 07:32:09.641: INFO: Container test-container ready: false, restart count 0 Oct 10 07:32:09.641: INFO: image-pull-test43d1e958-8205-430b-a288-2804c0077978 started at 2019-10-10 07:30:05 +0000 UTC (0+1 container statuses recorded) Oct 10 07:32:09.641: INFO: Container image-pull-test ready: false, restart count 0 Oct 10 07:32:09.641: INFO: test-pod started at 2019-10-10 07:32:01 +0000 UTC (0+3 container statuses recorded) Oct 10 07:32:09.641: INFO: Container busybox-1 ready: true, restart count 0 Oct 10 07:32:09.641: INFO: Container busybox-2 ready: true, restart count 0 Oct 10 07:32:09.641: INFO: Container busybox-3 ready: true, restart count 0 Oct 10 07:32:09.641: INFO: labelsupdate91434132-5981-4043-90ae-63771c2f3e06 started at 2019-10-10 07:31:58 +0000 UTC (0+1 container statuses recorded) Oct 10 07:32:09.641: INFO: Container client-container ready: true, restart count 0 Oct 10 07:32:09.641: INFO: pod-projected-configmaps-66e93e1c-1adf-4fa2-82aa-6e59807e6ed6 started at 2019-10-10 07:30:34 +0000 UTC (0+3 container statuses recorded) Oct 10 07:32:09.641: INFO: Container createcm-volume-test ready: false, restart count 0 Oct 10 07:32:09.641: INFO: Container delcm-volume-test ready: false, restart count 0 Oct 10 07:32:09.641: INFO: Container updcm-volume-test ready: false, restart count 0 Oct 10 07:32:09.641: INFO: test-host-network-pod started at 2019-10-10 07:32:05 +0000 UTC (0+2 container statuses recorded) Oct 10 07:32:09.641: INFO: Container busybox-1 ready: true, restart count 0 Oct 10 07:32:09.641: INFO: Container busybox-2 ready: true, restart count 0 Oct 10 07:32:09.641: INFO: stats-busybox-0 started at 2019-10-10 07:31:06 +0000 UTC (0+1 container statuses recorded) Oct 10 07:32:09.641: INFO: Container busybox-container ready: true, restart count 1 Oct 10 07:32:09.641: INFO: image-pull-test81021262-6f83-4bcd-bfee-754e0a64c419 started at 2019-10-10 07:32:06 +0000 UTC (0+1 container statuses recorded) Oct 10 07:32:09.641: INFO: Container image-pull-test ready: false, restart count 0 Oct 10 07:32:09.641: INFO: test-webserver-7c82f071-76b3-4e72-9c35-d760adaa4da9 started at 2019-10-10 07:31:09 +0000 UTC (0+1 container statuses recorded) Oct 10 07:32:09.641: INFO: Container test-webserver ready: false, restart count 0 Oct 10 07:32:09.641: INFO: pod-init-e6d339e8-ec6a-453d-8e8b-be05955bd0fd started at 2019-10-10 07:32:06 +0000 UTC (2+1 container statuses recorded) Oct 10 07:32:09.641: INFO: Init container init1 ready: true, restart count 0 Oct 10 07:32:09.641: INFO: Init container init2 ready: true, restart count 0 Oct 10 07:32:09.641: INFO: Container run1 ready: true, restart count 0 Oct 10 07:32:09.641: INFO: 
liveness-d9159aa2-be77-432b-8a73-8582f0a6a4b4 started at 2019-10-10 07:29:09 +0000 UTC (0+1 container statuses recorded) Oct 10 07:32:09.641: INFO: Container liveness ready: true, restart count 0 W1010 07:32:09.651160 2692 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Oct 10 07:32:09.764: INFO: Latency metrics for node tmp-node-e2e-150eaf11-ubuntu-gke-1804-d1703-0-v20181113 Oct 10 07:32:09.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5148" for this suite.
Find test-webserver-7c82f071-76b3-4e72-9c35-d760adaa4da9 mentions in log files | View test history on testgrid
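All three failures above are the same readiness-probe conformance test, run once per node image (cos-stable-60, cos-stable-63, and ubuntu-gke-1804), and each fails at test/e2e/common/container_probe.go:108 with the same message: the pod never reported the {Ready True} condition that the check at framework.go:691 was looking for. As a minimal sketch of what that message means in terms of the Pod API (this is not the framework's actual helper; hasReadyTrue is a hypothetical name used only for illustration):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// hasReadyTrue reports whether the pod carries the {Ready True} condition
// that the failure messages above say was never reached.
func hasReadyTrue(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	// In the logged runs the pods only ever reported Ready=False with
	// reason ContainersNotReady, so a check of this shape never succeeds.
	pod := &corev1.Pod{
		Status: corev1.PodStatus{
			Conditions: []corev1.PodCondition{
				{Type: corev1.PodReady, Status: corev1.ConditionFalse, Reason: "ContainersNotReady"},
			},
		},
	}
	fmt.Println(hasReadyTrue(pod)) // prints "false"
}

In each run the events show the container started and the readiness probe on port 81 was refused as the test intends, yet the failure is that Ready never became True, which is what points the investigation at the probe-test assertions touched in this PR rather than at the nodes themselves.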
error during go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=k8s-jkns-pr-node-e2e --zone=us-west1-b --ssh-user=prow --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=8 --focus="\[NodeConformance\]" --skip="\[Flaky\]|\[Slow\]|\[Serial\]" --flakeAttempts=2 --test_args=--kubelet-flags="--cgroups-per-qos=true --cgroup-root=/" --test-timeout=1h5m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/image-config.yaml: exit status 1
from junit_runner.xml
Filter through log files | View test history on testgrid
Deferred TearDown
DumpClusterLogs
E2eNode Suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Runtime Conformance Test container runtime conformance blackbox test when running a container with a new image should be able to pull from private registry with credential provider [NodeConformance]
E2eNode Suite [k8s.io] Container Runtime Conformance Test container runtime conformance blackbox test when running a container with a new image should be able to pull from private registry with credential provider [NodeConformance]
E2eNode Suite [k8s.io] Container Runtime Conformance Test container runtime conformance blackbox test when running a container with a new image should be able to pull from private registry with credential provider [NodeConformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull image from docker hub [NodeConformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull image from docker hub [NodeConformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull image from docker hub [NodeConformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull image from gcr.io [NodeConformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull image from gcr.io [NodeConformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull image from gcr.io [NodeConformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull non-existing image from gcr.io [NodeConformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull non-existing image from gcr.io [NodeConformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull non-existing image from gcr.io [NodeConformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] ContainerLogPath [NodeConformance] Pod with a container printed log to stdout should print log to correct cri log path
E2eNode Suite [k8s.io] ContainerLogPath [NodeConformance] Pod with a container printed log to stdout should print log to correct cri log path
E2eNode Suite [k8s.io] ContainerLogPath [NodeConformance] Pod with a container printed log to stdout should print log to correct cri log path
E2eNode Suite [k8s.io] ContainerLogPath [NodeConformance] Pod with a container printed log to stdout should print log to correct log path
E2eNode Suite [k8s.io] ContainerLogPath [NodeConformance] Pod with a container printed log to stdout should print log to correct log path
E2eNode Suite [k8s.io] ContainerLogPath [NodeConformance] Pod with a container printed log to stdout should print log to correct log path
E2eNode Suite [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
E2eNode Suite [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
E2eNode Suite [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
E2eNode Suite [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
E2eNode Suite [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
E2eNode Suite [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
E2eNode Suite [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
E2eNode Suite [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
E2eNode Suite [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
E2eNode Suite [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
E2eNode Suite [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
E2eNode Suite [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
E2eNode Suite [k8s.io] Kubelet Cgroup Manager Pod containers [NodeConformance] On scheduling a BestEffort Pod Pod containers should have been created under the BestEffort cgroup
E2eNode Suite [k8s.io] Kubelet Cgroup Manager Pod containers [NodeConformance] On scheduling a BestEffort Pod Pod containers should have been created under the BestEffort cgroup
E2eNode Suite [k8s.io] Kubelet Cgroup Manager Pod containers [NodeConformance] On scheduling a BestEffort Pod Pod containers should have been created under the BestEffort cgroup
E2eNode Suite [k8s.io] Kubelet Cgroup Manager Pod containers [NodeConformance] On scheduling a Burstable Pod Pod containers should have been created under the Burstable cgroup
E2eNode Suite [k8s.io] Kubelet Cgroup Manager Pod containers [NodeConformance] On scheduling a Burstable Pod Pod containers should have been created under the Burstable cgroup
E2eNode Suite [k8s.io] Kubelet Cgroup Manager Pod containers [NodeConformance] On scheduling a Burstable Pod Pod containers should have been created under the Burstable cgroup
E2eNode Suite [k8s.io] Kubelet Cgroup Manager Pod containers [NodeConformance] On scheduling a Guaranteed Pod Pod containers should have been created under the cgroup-root
E2eNode Suite [k8s.io] Kubelet Cgroup Manager Pod containers [NodeConformance] On scheduling a Guaranteed Pod Pod containers should have been created under the cgroup-root
E2eNode Suite [k8s.io] Kubelet Cgroup Manager Pod containers [NodeConformance] On scheduling a Guaranteed Pod Pod containers should have been created under the cgroup-root
E2eNode Suite [k8s.io] Kubelet Cgroup Manager QOS containers On enabling QOS cgroup hierarchy Top level QoS containers should have been created [NodeConformance]
E2eNode Suite [k8s.io] Kubelet Cgroup Manager QOS containers On enabling QOS cgroup hierarchy Top level QoS containers should have been created [NodeConformance]
E2eNode Suite [k8s.io] Kubelet Cgroup Manager QOS containers On enabling QOS cgroup hierarchy Top level QoS containers should have been created [NodeConformance]
E2eNode Suite [k8s.io] Kubelet Volume Manager Volume Manager On terminatation of pod with memory backed volume should remove the volume from the node [NodeConformance]
E2eNode Suite [k8s.io] Kubelet Volume Manager Volume Manager On terminatation of pod with memory backed volume should remove the volume from the node [NodeConformance]
E2eNode Suite [k8s.io] Kubelet Volume Manager Volume Manager On terminatation of pod with memory backed volume should remove the volume from the node [NodeConformance]
E2eNode Suite [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] MirrorPod when create a mirror pod should be recreated when mirror pod forcibly deleted [NodeConformance]
E2eNode Suite [k8s.io] MirrorPod when create a mirror pod should be recreated when mirror pod forcibly deleted [NodeConformance]
E2eNode Suite [k8s.io] MirrorPod when create a mirror pod should be recreated when mirror pod forcibly deleted [NodeConformance]
E2eNode Suite [k8s.io] MirrorPod when create a mirror pod should be recreated when mirror pod gracefully deleted [NodeConformance]
E2eNode Suite [k8s.io] MirrorPod when create a mirror pod should be recreated when mirror pod gracefully deleted [NodeConformance]
E2eNode Suite [k8s.io] MirrorPod when create a mirror pod should be recreated when mirror pod gracefully deleted [NodeConformance]
E2eNode Suite [k8s.io] MirrorPod when create a mirror pod should be updated when static pod updated [NodeConformance]
E2eNode Suite [k8s.io] MirrorPod when create a mirror pod should be updated when static pod updated [NodeConformance]
E2eNode Suite [k8s.io] MirrorPod when create a mirror pod should be updated when static pod updated [NodeConformance]
E2eNode Suite [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Pods should be updated [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Pods should be updated [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Pods should be updated [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]
E2eNode Suite [k8s.io] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]
E2eNode Suite [k8s.io] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]
E2eNode Suite [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance]
E2eNode Suite [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance]
E2eNode Suite [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance]
E2eNode Suite [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]
E2eNode Suite [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]
E2eNode Suite [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]
E2eNode Suite [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
E2eNode Suite [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
E2eNode Suite [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
E2eNode Suite [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
E2eNode Suite [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
E2eNode Suite [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
E2eNode Suite [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]
E2eNode Suite [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]
E2eNode Suite [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]
E2eNode Suite [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
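The Security Context cases above each flip one field on the container's securityContext. A hedged sketch of those fields, grouped into one struct purely for illustration (the suite sets them one at a time, and the uid values are examples):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func boolPtr(b bool) *bool    { return &b }
func int64Ptr(i int64) *int64 { return &i }

// sc collects the knobs checked above: runAsUser (uid 0 / 65534),
// privileged=false, readOnlyRootFilesystem true/false, and
// allowPrivilegeEscalation left unset, true, or false.
var sc = corev1.SecurityContext{
	RunAsUser:                int64Ptr(65534),
	Privileged:               boolPtr(false),
	ReadOnlyRootFilesystem:   boolPtr(true),
	AllowPrivilegeEscalation: boolPtr(false),
}

func main() { fmt.Println(*sc.RunAsUser, *sc.ReadOnlyRootFilesystem) }
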
E2eNode Suite [k8s.io] Summary API [NodeConformance] when querying /stats/summary should report resource usage through the stats api
E2eNode Suite [k8s.io] Summary API [NodeConformance] when querying /stats/summary should report resource usage through the stats api
E2eNode Suite [k8s.io] Summary API [NodeConformance] when querying /stats/summary should report resource usage through the stats api
E2eNode Suite [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
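The Variable Expansion cases rely on $(VAR) substitution in env, command, and args. A small sketch of the container shape they create; the image, variable names, and values are illustrative assumptions:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// expansion shows the three behaviours checked above: composing one env var
// from another, and substituting $(VAR) into command and args.
var expansion = corev1.Container{
	Name:    "dapi-container",
	Image:   "busybox",
	Command: []string{"sh", "-c", "echo $(COMPOSED)"}, // substitution in command
	Args:    []string{"$(COMPOSED)"},                  // substitution in args
	Env: []corev1.EnvVar{
		{Name: "FIRST", Value: "foo"},
		{Name: "COMPOSED", Value: "prefix-$(FIRST)-suffix"}, // composed from FIRST
	},
}

func main() { fmt.Println(expansion.Env[1].Value) }
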
E2eNode Suite [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
E2eNode Suite [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
E2eNode Suite [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
E2eNode Suite [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]
E2eNode Suite [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]
E2eNode Suite [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]
E2eNode Suite [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
E2eNode Suite [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
E2eNode Suite [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
E2eNode Suite [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]
E2eNode Suite [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]
E2eNode Suite [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]
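The [sig-api-machinery] Secrets and [sig-node] ConfigMap environment cases above consume keys either one at a time (secretKeyRef / configMapKeyRef) or wholesale via envFrom. A hedged sketch of both shapes; the object and key names are made up:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// envFromRefs mirrors "consumable via the environment": every key of the
// named Secret and ConfigMap becomes an environment variable.
var envFromRefs = []corev1.EnvFromSource{
	{SecretRef: &corev1.SecretEnvSource{LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test"}}},
	{ConfigMapRef: &corev1.ConfigMapEnvSource{LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test"}}},
}

// singleKeyRefs mirrors "consumable from pods in env vars" / "consumable via
// environment variable": one key is plucked out and bound to a named variable.
var singleKeyRefs = []corev1.EnvVar{
	{Name: "SECRET_DATA", ValueFrom: &corev1.EnvVarSource{
		SecretKeyRef: &corev1.SecretKeySelector{
			LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test"},
			Key:                  "data-1",
		},
	}},
	{Name: "CONFIG_DATA", ValueFrom: &corev1.EnvVarSource{
		ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
			LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test"},
			Key:                  "data-1",
		},
	}},
}

func main() { fmt.Println(len(envFromRefs), len(singleKeyRefs)) }
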
E2eNode Suite [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
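The [sig-node] Downward API cases expose pod and resource metadata as env vars through fieldRef and resourceFieldRef. A minimal sketch; the variable names and container name are illustrative, the fieldPath/resource strings are the core-API ones:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// downwardEnv covers the fields checked above: pod name, UID, host IP, and
// the container's cpu/memory limits and requests. When no limit is set,
// the resourceFieldRef resolves to node allocatable, which is what the
// "default limits ... from node allocatable" cases verify.
var downwardEnv = []corev1.EnvVar{
	{Name: "POD_NAME", ValueFrom: &corev1.EnvVarSource{
		FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"}}},
	{Name: "POD_UID", ValueFrom: &corev1.EnvVarSource{
		FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.uid"}}},
	{Name: "HOST_IP", ValueFrom: &corev1.EnvVarSource{
		FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.hostIP"}}},
	{Name: "CPU_LIMIT", ValueFrom: &corev1.EnvVarSource{
		ResourceFieldRef: &corev1.ResourceFieldSelector{
			ContainerName: "dapi-container", Resource: "limits.cpu"}}},
	{Name: "MEMORY_REQUEST", ValueFrom: &corev1.EnvVarSource{
		ResourceFieldRef: &corev1.ResourceFieldSelector{
			ContainerName: "dapi-container", Resource: "requests.memory"}}},
}

func main() { fmt.Println(len(downwardEnv)) }
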
E2eNode Suite [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
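The [sig-storage] ConfigMap cases mount a ConfigMap as a volume, optionally with defaultMode, key-to-path mappings, and per-item modes, and check that later updates are reflected in the mounted files. A hedged sketch of such a volume; the ConfigMap name, key, and modes are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func int32Ptr(i int32) *int32 { return &i }

// configMapVolume shows the shapes exercised above: defaultMode on the whole
// volume, a key remapped to a path ("mappings"), and a per-item mode.
var configMapVolume = corev1.Volume{
	Name: "configmap-volume",
	VolumeSource: corev1.VolumeSource{
		ConfigMap: &corev1.ConfigMapVolumeSource{
			LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test"},
			DefaultMode:          int32Ptr(0400),
			Items: []corev1.KeyToPath{
				{Key: "data-1", Path: "path/to/data-1", Mode: int32Ptr(0444)},
			},
		},
	},
}

func main() { fmt.Println(configMapVolume.ConfigMap.Items[0].Path) }
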
E2eNode Suite [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
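The Downward API volume cases project the same metadata as files instead of env vars, including labels and annotations that the kubelet must rewrite on modification. A sketch of the volume source; file paths, container name, and modes are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func modePtr(i int32) *int32 { return &i }

// downwardVolume covers the file-based variants above: podname, labels and
// annotations (updated in place on modification), cpu/memory limits and
// requests, plus DefaultMode and a per-item Mode.
var downwardVolume = corev1.DownwardAPIVolumeSource{
	DefaultMode: modePtr(0644),
	Items: []corev1.DownwardAPIVolumeFile{
		{Path: "podname", FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"}},
		{Path: "labels", FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"}},
		{Path: "annotations", FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"}},
		{Path: "cpu_limit", ResourceFieldRef: &corev1.ResourceFieldSelector{
			ContainerName: "client-container", Resource: "limits.cpu"}},
		{Path: "memory_request", Mode: modePtr(0400), ResourceFieldRef: &corev1.ResourceFieldSelector{
			ContainerName: "client-container", Resource: "requests.memory"}},
	},
}

func main() { fmt.Println(len(downwardVolume.Items)) }
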
E2eNode Suite [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
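The EmptyDir matrix above varies the writing user (root vs non-root), the requested file mode, and the medium (node default vs tmpfs). A minimal sketch of the two volume variants; the (user, mode) part of each test name is exercised by the pod writing into the mount, not by the volume source itself:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// defaultMedium is backed by the node's root filesystem storage;
// tmpfsMedium ("Memory") is backed by RAM.
var (
	defaultMedium = corev1.Volume{
		Name:         "test-volume",
		VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
	}
	tmpfsMedium = corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
		},
	}
)

func main() { fmt.Println(defaultMedium.EmptyDir.Medium, tmpfsMedium.EmptyDir.Medium) }
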
E2eNode Suite [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] HostPath should support r/w [NodeConformance]
E2eNode Suite [sig-storage] HostPath should support r/w [NodeConformance]
E2eNode Suite [sig-storage] HostPath should support r/w [NodeConformance]
E2eNode Suite [sig-storage] HostPath should support subPath [NodeConformance]
E2eNode Suite [sig-storage] HostPath should support subPath [NodeConformance]
E2eNode Suite [sig-storage] HostPath should support subPath [NodeConformance]
E2eNode Suite [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
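The "Projected combined" case layers a configMap, a secret, and a downwardAPI item into one projected volume; the Projected configMap / downwardAPI / secret cases that follow re-run the plain-volume checks through the same source type. A hedged sketch of the combined source (object names are made up):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// combined mirrors "project all components that make up the projection API":
// one volume whose sources are a configMap, a secret, and a downwardAPI item.
var combined = corev1.ProjectedVolumeSource{
	Sources: []corev1.VolumeProjection{
		{ConfigMap: &corev1.ConfigMapProjection{
			LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test"}}},
		{Secret: &corev1.SecretProjection{
			LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"}}},
		{DownwardAPI: &corev1.DownwardAPIProjection{
			Items: []corev1.DownwardAPIVolumeFile{
				{Path: "podname", FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"}},
			}}},
	},
}

func main() { fmt.Println(len(combined.Sources)) }
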
E2eNode Suite [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
E2eNode Suite [sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
E2eNode Suite [sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
E2eNode Suite [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
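The [sig-storage] Secrets cases mirror the ConfigMap volume matrix: defaultMode, item mappings with modes, non-root with fsGroup, multiple mounts of one secret, and updates to optional secrets. A sketch of the volume plus the pod-level fsGroup knob; names, uids, and modes are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func i32(v int32) *int32 { return &v }
func i64(v int64) *int64 { return &v }

// secretVolume shows defaultMode plus a key remapped to a path with its own
// mode; podSecurity shows the fsGroup used by the "as non-root with
// defaultMode and fsGroup set" variants.
var (
	secretVolume = corev1.Volume{
		Name: "secret-volume",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName:  "secret-test",
				DefaultMode: i32(0400),
				Items: []corev1.KeyToPath{
					{Key: "data-1", Path: "new-path-data-1", Mode: i32(0440)},
				},
			},
		},
	}
	podSecurity = corev1.PodSecurityContext{
		RunAsUser: i64(1000),
		FSGroup:   i64(1001),
	}
)

func main() { fmt.Println(secretVolume.Secret.SecretName, *podSecurity.FSGroup) }
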
TearDown
TearDown Previous
Timeout
Up
test setup
E2eNode Suite [k8s.io] AppArmor [Feature:AppArmor][NodeFeature:AppArmor] when running with AppArmor should enforce a permissive profile
E2eNode Suite [k8s.io] AppArmor [Feature:AppArmor][NodeFeature:AppArmor] when running with AppArmor should enforce a permissive profile
E2eNode Suite [k8s.io] AppArmor [Feature:AppArmor][NodeFeature:AppArmor] when running with AppArmor should enforce a permissive profile
E2eNode Suite [k8s.io] AppArmor [Feature:AppArmor][NodeFeature:AppArmor] when running with AppArmor should enforce a profile blocking writes
E2eNode Suite [k8s.io] AppArmor [Feature:AppArmor][NodeFeature:AppArmor] when running with AppArmor should enforce a profile blocking writes
E2eNode Suite [k8s.io] AppArmor [Feature:AppArmor][NodeFeature:AppArmor] when running with AppArmor should enforce a profile blocking writes
E2eNode Suite [k8s.io] AppArmor [Feature:AppArmor][NodeFeature:AppArmor] when running with AppArmor should reject an unloaded profile
E2eNode Suite [k8s.io] AppArmor [Feature:AppArmor][NodeFeature:AppArmor] when running with AppArmor should reject an unloaded profile
E2eNode Suite [k8s.io] AppArmor [Feature:AppArmor][NodeFeature:AppArmor] when running with AppArmor should reject an unloaded profile
E2eNode Suite [k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup pod infra containers oom-score-adj should be -998 and best effort container's should be 1000
E2eNode Suite [k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup pod infra containers oom-score-adj should be -998 and best effort container's should be 1000
E2eNode Suite [k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup pod infra containers oom-score-adj should be -998 and best effort container's should be 1000
E2eNode Suite [k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup Kubelet's oom-score-adj should be -999
E2eNode Suite [k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup Kubelet's oom-score-adj should be -999
E2eNode Suite [k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup Kubelet's oom-score-adj should be -999
E2eNode Suite [k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup burstable container's oom-score-adj should be between [2, 1000)
E2eNode Suite [k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup burstable container's oom-score-adj should be between [2, 1000)
E2eNode Suite [k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup burstable container's oom-score-adj should be between [2, 1000)
E2eNode Suite [k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup container runtime's oom-score-adj should be -999
E2eNode Suite [k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup container runtime's oom-score-adj should be -999
E2eNode Suite [k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup container runtime's oom-score-adj should be -999
E2eNode Suite [k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup guaranteed container's oom-score-adj should be -998
E2eNode Suite [k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup guaranteed container's oom-score-adj should be -998
E2eNode Suite [k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup guaranteed container's oom-score-adj should be -998
E2eNode Suite [k8s.io] ContainerLogRotation [Slow] [Serial] [Disruptive] when a container generates a lot of log should be rotated and limited to a fixed amount of files
E2eNode Suite [k8s.io] ContainerLogRotation [Slow] [Serial] [Disruptive] when a container generates a lot of log should be rotated and limited to a fixed amount of files
E2eNode Suite [k8s.io] ContainerLogRotation [Slow] [Serial] [Disruptive] when a container generates a lot of log should be rotated and limited to a fixed amount of files
E2eNode Suite [k8s.io] CriticalPod [Serial] [Disruptive] [NodeFeature:CriticalPod] when we need to admit a critical pod should be able to create and delete a critical pod
E2eNode Suite [k8s.io] CriticalPod [Serial] [Disruptive] [NodeFeature:CriticalPod] when we need to admit a critical pod should be able to create and delete a critical pod
E2eNode Suite [k8s.io] CriticalPod [Serial] [Disruptive] [NodeFeature:CriticalPod] when we need to admit a critical pod should be able to create and delete a critical pod
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 100ms interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 100ms interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 100ms interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 300ms interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 300ms interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 300ms interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 105 pods with 0s interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 105 pods with 0s interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 105 pods with 0s interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 105 pods with 100ms interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 105 pods with 100ms interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 105 pods with 100ms interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 105 pods with 300ms interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 105 pods with 300ms interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 105 pods with 300ms interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 35 pods with 0s interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 35 pods with 0s interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 35 pods with 0s interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 35 pods with 100ms interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 35 pods with 100ms interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 35 pods with 100ms interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 35 pods with 300ms interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 35 pods with 300ms interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 35 pods with 300ms interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods with higher API QPS latency/resource should be within limit when create 105 pods with 0s interval (QPS 60) [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods with higher API QPS latency/resource should be within limit when create 105 pods with 0s interval (QPS 60) [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods with higher API QPS latency/resource should be within limit when create 105 pods with 0s interval (QPS 60) [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods with higher API QPS latency/resource should be within limit when create 105 pods with 100ms interval (QPS 60) [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods with higher API QPS latency/resource should be within limit when create 105 pods with 100ms interval (QPS 60) [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods with higher API QPS latency/resource should be within limit when create 105 pods with 100ms interval (QPS 60) [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods with higher API QPS latency/resource should be within limit when create 105 pods with 300ms interval (QPS 60) [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods with higher API QPS latency/resource should be within limit when create 105 pods with 300ms interval (QPS 60) [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods with higher API QPS latency/resource should be within limit when create 105 pods with 300ms interval (QPS 60) [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 10 pods with 50 background pods
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 10 pods with 50 background pods
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 10 pods with 50 background pods
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 10 pods with 50 background pods [Benchmark][NodeSpeicalFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 10 pods with 50 background pods [Benchmark][NodeSpeicalFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 10 pods with 50 background pods [Benchmark][NodeSpeicalFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 30 pods with 50 background pods [Benchmark][NodeSpeicalFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 30 pods with 50 background pods [Benchmark][NodeSpeicalFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 30 pods with 50 background pods [Benchmark][NodeSpeicalFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 50 pods with 50 background pods [Benchmark][NodeSpeicalFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 50 pods with 50 background pods [Benchmark][NodeSpeicalFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 50 pods with 50 background pods [Benchmark][NodeSpeicalFeature:Benchmark]
E2eNode Suite [k8s.io] Device Plugin [Feature:DevicePluginProbe][NodeFeature:DevicePluginProbe][Serial] DevicePlugin Verifies the Kubelet device plugin functionality.
E2eNode Suite [k8s.io] Device Plugin [Feature:DevicePluginProbe][NodeFeature:DevicePluginProbe][Serial] DevicePlugin Verifies the Kubelet device plugin functionality.
E2eNode Suite [k8s.io] Device Plugin [Feature:DevicePluginProbe][NodeFeature:DevicePluginProbe][Serial] DevicePlugin Verifies the Kubelet device plugin functionality.
E2eNode Suite [k8s.io] Docker features [Feature:Docker][Legacy:Docker] when live-restore is enabled [Serial] [Slow] [Disruptive] containers should not be disrupted when the daemon shuts down and restarts
E2eNode Suite [k8s.io] Docker features [Feature:Docker][Legacy:Docker] when live-restore is enabled [Serial] [Slow] [Disruptive] containers should not be disrupted when the daemon shuts down and restarts
E2eNode Suite [k8s.io] Docker features [Feature:Docker][Legacy:Docker] when live-restore is enabled [Serial] [Slow] [Disruptive] containers should not be disrupted when the daemon shuts down and restarts
E2eNode Suite [k8s.io] Downward API [Serial] [Disruptive] [NodeFeature:EphemeralStorage] Downward API tests for local ephemeral storage should provide container's limits.ephemeral-storage and requests.ephemeral-storage as env vars
E2eNode Suite [k8s.io] Downward API [Serial] [Disruptive] [NodeFeature:EphemeralStorage] Downward API tests for local ephemeral storage should provide container's limits.ephemeral-storage and requests.ephemeral-storage as env vars
E2eNode Suite [k8s.io] Downward API [Serial] [Disruptive] [NodeFeature:EphemeralStorage] Downward API tests for local ephemeral storage should provide container's limits.ephemeral-storage and requests.ephemeral-storage as env vars
E2eNode Suite [k8s.io] Downward API [Serial] [Disruptive] [NodeFeature:EphemeralStorage] Downward API tests for local ephemeral storage should provide default limits.ephemeral-storage from node allocatable
E2eNode Suite [k8s.io] Downward API [Serial] [Disruptive] [NodeFeature:EphemeralStorage] Downward API tests for local ephemeral storage should provide default limits.ephemeral-storage from node allocatable
E2eNode Suite [k8s.io] Downward API [Serial] [Disruptive] [NodeFeature:EphemeralStorage] Downward API tests for local ephemeral storage should provide default limits.ephemeral-storage from node allocatable
E2eNode Suite [k8s.io] GKE system requirements [NodeConformance][Feature:GKEEnv][NodeFeature:GKEEnv] The GCR is accessible
E2eNode Suite [k8s.io] GKE system requirements [NodeConformance][Feature:GKEEnv][NodeFeature:GKEEnv] The GCR is accessible
E2eNode Suite [k8s.io] GKE system requirements [NodeConformance][Feature:GKEEnv][NodeFeature:GKEEnv] The GCR is accessible
E2eNode Suite [k8s.io] GKE system requirements [NodeConformance][Feature:GKEEnv][NodeFeature:GKEEnv] The docker configuration validation should pass
E2eNode Suite [k8s.io] GKE system requirements [NodeConformance][Feature:GKEEnv][NodeFeature:GKEEnv] The docker container network should work
E2eNode Suite [k8s.io] GKE system requirements [NodeConformance][Feature:GKEEnv][NodeFeature:GKEEnv] The docker daemon should support AppArmor and seccomp
E2eNode Suite [k8s.io] GKE system requirements [NodeConformance][Feature:GKEEnv][NodeFeature:GKEEnv] The docker storage driver should work
E2eNode Suite [k8s.io] GKE system requirements [NodeConformance][Feature:GKEEnv][NodeFeature:GKEEnv] The iptable rules should work (required by kube-proxy)
E2eNode Suite [k8s.io] GKE system requirements [NodeConformance][Feature:GKEEnv][NodeFeature:GKEEnv] The required processes should be running
E2eNode Suite [k8s.io] GarbageCollect [Serial][NodeFeature:GarbageCollect] Garbage Collection Test: Many Pods with Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container
E2eNode Suite [k8s.io] GarbageCollect [Serial][NodeFeature:GarbageCollect] Garbage Collection Test: Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container
E2eNode Suite [k8s.io] GarbageCollect [Serial][NodeFeature:GarbageCollect] Garbage Collection Test: One Non-restarting Container Should eventually garbage collect containers when we exceed the number of dead containers per container
E2eNode Suite [k8s.io] ImageGCNoEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods
E2eNode Suite [k8s.io] ImageID [NodeFeature: ImageID] should be set to the manifest digest (from RepoDigests) when available
E2eNode Suite [k8s.io] InodeEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods
E2eNode Suite [k8s.io] Lease lease API should be available [Conformance]
E2eNode Suite [k8s.io] LocalStorageCapacityIsolationEviction [Slow] [Serial] [Disruptive] [Feature:LocalStorageCapacityIsolation][NodeFeature:Eviction] when we run containers that should cause evictions due to pod local storage violations should eventually evict all of the correct pods
E2eNode Suite [k8s.io] LocalStorageCapacityIsolationQuotaMonitoring [Slow] [Serial] [Disruptive] [Feature:LocalStorageCapacityIsolationQuota][NodeFeature:LSCIQuotaMonitoring] when we run containers that should cause use quotas for LSCI monitoring (quotas enabled: false) should eventually evict all of the correct pods
E2eNode Suite [k8s.io] LocalStorageCapacityIsolationQuotaMonitoring [Slow] [Serial] [Disruptive] [Feature:LocalStorageCapacityIsolationQuota][NodeFeature:LSCIQuotaMonitoring] when we run containers that should cause use quotas for LSCI monitoring (quotas enabled: true) should eventually evict all of the correct pods
E2eNode Suite [k8s.io] LocalStorageEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods
E2eNode Suite [k8s.io] LocalStorageSoftEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods
E2eNode Suite [k8s.io] MemoryAllocatableEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause MemoryPressure should eventually evict all of the correct pods
E2eNode Suite [k8s.io] NVIDIA GPU Device Plugin [Feature:GPUDevicePlugin][NodeFeature:GPUDevicePlugin][Serial] [Disruptive] DevicePlugin checks that when Kubelet restarts exclusive GPU assignation to pods is kept.
E2eNode Suite [k8s.io] Node Container Manager [Serial] Validate Node Allocatable [NodeFeature:NodeAllocatable] sets up the node and runs the test
E2eNode Suite [k8s.io] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace
E2eNode Suite [k8s.io] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently
E2eNode Suite [k8s.io] NodeProblemDetector [NodeFeature:NodeProblemDetector] [k8s.io] SystemLogMonitor should generate node condition and events for corresponding errors
E2eNode Suite [k8s.io] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
E2eNode Suite [k8s.io] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]
E2eNode Suite [k8s.io] Pods should support pod readiness gates [NodeFeature:PodReadinessGate]
E2eNode Suite [k8s.io] PriorityLocalStorageEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods
E2eNode Suite [k8s.io] PriorityMemoryEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause MemoryPressure should eventually evict all of the correct pods
E2eNode Suite [k8s.io] PriorityPidEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause PIDPressure should eventually evict all of the correct pods
E2eNode Suite [k8s.io] Probing container should *not* be restarted with a non-local redirect http liveness probe
E2eNode Suite [k8s.io] Probing container should be restarted with a docker exec liveness probe with timeout
E2eNode Suite [k8s.io] Probing container should be restarted with a local redirect http liveness probe
E2eNode Suite [k8s.io] ResourceMetricsAPI when querying /resource/metrics should report resource usage through the v1alpha1 resouce metrics api
E2eNode Suite [k8s.io] Restart [Serial] [Slow] [Disruptive] [NodeFeature:ContainerRuntimeRestart] Container Runtime Network should recover from ip leak
E2eNode Suite [k8s.io] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]
E2eNode Suite [k8s.io] Security Context When creating a container with runAsNonRoot should not run without a specified user ID
E2eNode Suite [k8s.io] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]
E2eNode Suite [k8s.io] Security Context When creating a container with runAsNonRoot should run with an image specified user ID
E2eNode Suite [k8s.io] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]
E2eNode Suite [k8s.io] Security Context when creating a pod in the host IPC namespace should not show the shared memory ID in the non-hostIPC containers [NodeFeature:HostAccess]
E2eNode Suite [k8s.io] Security Context when creating a pod in the host IPC namespace should show the shared memory ID in the host IPC containers [NodeFeature:HostAccess]
E2eNode Suite [k8s.io] Security Context when creating a pod in the host PID namespace should not show its pid in the non-hostpid containers [NodeFeature:HostAccess]
E2eNode Suite [k8s.io] Security Context when creating a pod in the host PID namespace should show its pid in the host PID namespace [NodeFeature:HostAccess]
E2eNode Suite [k8s.io] Security Context when creating a pod in the host network namespace should listen on same port in the host network containers [NodeFeature:HostAccess]
E2eNode Suite [k8s.io] Security Context when creating a pod in the host network namespace shouldn't show the same port in the non-hostnetwork containers [NodeFeature:HostAccess]
E2eNode Suite [k8s.io] Security Context when pod PID namespace is configurable [Feature:ShareProcessNamespace][NodeAlphaFeature:ShareProcessNamespace] containers in pods using isolated PID namespaces should all receive PID 1
E2eNode Suite [k8s.io] Security Context when pod PID namespace is configurable [Feature:ShareProcessNamespace][NodeAlphaFeature:ShareProcessNamespace] processes in containers sharing a pod namespace should be able to see each other [Alpha]
E2eNode Suite [k8s.io] StartupProbe [Serial] [Disruptive] [NodeAlphaFeature:StartupProbe] when a container has a startup probe should *not* be restarted with a exec "cat /tmp/health" because startup probe delays it [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] StartupProbe [Serial] [Disruptive] [NodeAlphaFeature:StartupProbe] when a container has a startup probe should be restarted with a exec "cat /tmp/health" after startup probe succeeds it [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] StartupProbe [Serial] [Disruptive] [NodeAlphaFeature:StartupProbe] when a container has a startup probe should be restarted with a exec "cat /tmp/health" because startup probe does not delay it long enough [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] StartupProbe [Serial] [Disruptive] [NodeAlphaFeature:StartupProbe] when a container has a startup probe should not be ready until startupProbe succeeds [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should not launch unsafe, but not explicitly enabled sysctls on the node
E2eNode Suite [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls
E2eNode Suite [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls
E2eNode Suite [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support unsafe sysctls which are actually whitelisted
E2eNode Suite [k8s.io] SystemNodeCriticalPod [Slow] [Serial] [Disruptive] [NodeFeature:SystemNodeCriticalPod] when create a system-node-critical pod should not be evicted upon DiskPressure
E2eNode Suite [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage][NodeFeature:VolumeSubpathEnvExpansion]
E2eNode Suite [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][NodeFeature:VolumeSubpathEnvExpansion][Slow]
E2eNode Suite [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][NodeFeature:VolumeSubpathEnvExpansion][Slow]
E2eNode Suite [k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][NodeFeature:VolumeSubpathEnvExpansion][Slow]
E2eNode Suite [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][NodeFeature:VolumeSubpathEnvExpansion][Slow]
E2eNode Suite [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][NodeFeature:VolumeSubpathEnvExpansion][Slow]
E2eNode Suite [k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] delete and recreate ConfigMap: error while ConfigMap is absent: status and events should match expectations
E2eNode Suite [k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] delete and recreate ConfigMap: state transitions: status and events should match expectations
E2eNode Suite [k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update ConfigMap in-place: recover to last-known-good version: status and events should match expectations
E2eNode Suite [k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update ConfigMap in-place: state transitions: status and events should match expectations
E2eNode Suite [k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update Node.Spec.ConfigSource: 100 update stress test: status and events should match expectations
E2eNode Suite [k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update Node.Spec.ConfigSource: non-nil last-known-good to a new non-nil last-known-good status and events should match expectations
E2eNode Suite [k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update Node.Spec.ConfigSource: recover to last-known-good ConfigMap.KubeletConfigKey: status and events should match expectations
E2eNode Suite [k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update Node.Spec.ConfigSource: recover to last-known-good ConfigMap: status and events should match expectations
E2eNode Suite [k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update Node.Spec.ConfigSource: state transitions: status and events should match expectations
E2eNode Suite [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]
E2eNode Suite [sig-node] CPU Manager [Serial] [Feature:CPUManager][NodeAlphaFeature:CPUManager] With kubeconfig updated with static CPU Manager policy run the CPU Manager tests should assign CPUs as expected based on the Pod spec
E2eNode Suite [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]
E2eNode Suite [sig-node] ConfigMap should update ConfigMap successfully
E2eNode Suite [sig-node] Dockershim [Serial] [Disruptive] [Feature:Docker][Legacy:Docker] When all containers in pod are missing should complete pod sandbox clean up based on the information in sandbox checkpoint
E2eNode Suite [sig-node] Dockershim [Serial] [Disruptive] [Feature:Docker][Legacy:Docker] When checkpoint file is corrupted should complete pod sandbox clean up
E2eNode Suite [sig-node] Dockershim [Serial] [Disruptive] [Feature:Docker][Legacy:Docker] When pod sandbox checkpoint is missing should complete pod sandbox clean up
E2eNode Suite [sig-node] Dockershim [Serial] [Disruptive] [Feature:Docker][Legacy:Docker] should clean up pod sandbox checkpoint after pod deletion
E2eNode Suite [sig-node] Dockershim [Serial] [Disruptive] [Feature:Docker][Legacy:Docker] should remove dangling checkpoint file
E2eNode Suite [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
E2eNode Suite [sig-node] HugePages [Serial] [Feature:HugePages][NodeFeature:HugePages] With config updated with hugepages feature enabled should assign hugepages as expected based on the Pod spec
E2eNode Suite [sig-node] Node Performance Testing [Serial] [Slow] [Flaky] Run node performance testing with pre-defined workloads NAS parallel benchmark (NPB) suite - Embarrassingly Parallel (EP) workload
E2eNode Suite [sig-node] Node Performance Testing [Serial] [Slow] [Flaky] Run node performance testing with pre-defined workloads NAS parallel benchmark (NPB) suite - Integer Sort (IS) workload
E2eNode Suite [sig-node] Node Performance Testing [Serial] [Slow] [Flaky] Run node performance testing with pre-defined workloads TensorFlow workload
E2eNode Suite [sig-node] PodPidsLimit [Serial] [Feature:SupportPodPidsLimit][NodeFeature:SupportPodPidsLimit] With config updated with pids feature enabled should set pids.max for Pod
E2eNode Suite [sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 0 pods per node [Benchmark]
E2eNode Suite [sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 10 pods per node
E2eNode Suite [sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 10 pods per node [Benchmark]
E2eNode Suite [sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 105 pods per node [Benchmark]
E2eNode Suite [sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 35 pods per node [Benchmark]
E2eNode Suite [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with an unconfigured handler
E2eNode Suite [sig-node] RuntimeClass should reject a Pod requesting a deleted RuntimeClass
E2eNode Suite [sig-node] RuntimeClass should reject a Pod requesting a non-existent RuntimeClass
E2eNode Suite [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with a configured handler [NodeFeature:RuntimeHandler]
E2eNode Suite [sig-storage] ConfigMap Should fail non-optional pod creation due to configMap object does not exist [Slow]
E2eNode Suite [sig-storage] ConfigMap Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] Downward API volume should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] Downward API volume should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)
E2eNode Suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root
E2eNode Suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is root
E2eNode Suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] nonexistent volume subPath should have the correct mode and owner using FSGroup
E2eNode Suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup
E2eNode Suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on tmpfs should have the correct mode using FSGroup
E2eNode Suite [sig-storage] GCP Volumes GlusterFS should be mountable
E2eNode Suite [sig-storage] GCP Volumes NFSv3 should be mountable for NFSv3
E2eNode Suite [sig-storage] GCP Volumes NFSv4 should be mountable for NFSv4
E2eNode Suite [sig-storage] Projected configMap Should fail non-optional pod creation due to configMap object does not exist [Slow]
E2eNode Suite [sig-storage] Projected configMap Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
E2eNode Suite [sig-storage] Projected configMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] Projected configMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] Projected secret Should fail non-optional pod creation due to secret object does not exist [Slow]
E2eNode Suite [sig-storage] Projected secret Should fail non-optional pod creation due to the key in the secret object does not exist [Slow]
E2eNode Suite [sig-storage] Secrets Should fail non-optional pod creation due to secret object does not exist [Slow]
E2eNode Suite [sig-storage] Secrets Should fail non-optional pod creation due to the key in the secret object does not exist [Slow]
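Note: the [NodeFeature:FSGroup] cases listed above all exercise pod-level fsGroup ownership management on emptyDir and projected volumes. As an illustration only (not taken from this report or from the e2e suite's source), the sketch below shows the kind of pod spec such a case constructs with the k8s.io/api/core/v1 types; the pod name, image, command, and GID 1234 are assumptions chosen for the example.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// fsGroupEmptyDirPod builds a pod whose tmpfs-backed emptyDir volume is
// subject to fsGroup ownership management, so files the container creates
// in /data end up group-owned by the configured GID.
func fsGroupEmptyDirPod() *corev1.Pod {
	fsGroup := int64(1234) // illustrative GID, not the value used by the suite
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "fsgroup-emptydir-demo"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{
				// Volumes that support ownership management are chown'd to this GID.
				FSGroup: &fsGroup,
			},
			Containers: []corev1.Container{{
				Name:    "writer",
				Image:   "busybox",
				Command: []string{"sh", "-c", "id && touch /data/new-file && ls -ln /data"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "data",
					MountPath: "/data",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "data",
				VolumeSource: corev1.VolumeSource{
					// Memory medium gives the tmpfs variant of the emptyDir cases above.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
}

func main() {
	fmt.Printf("%+v\n", fsGroupEmptyDirPod())
}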