Result: FAILURE
Tests: 13 failed / 40 succeeded
Started: 2020-03-29 13:56
Elapsed: 4h48m
Revision: v1.16.9-beta.0.17+bb616cba1cc983
Builder: f71660ef-71c4-11ea-a0b0-0e43f1618cf3
resultstore: https://source.cloud.google.com/results/invocations/5d8c5750-8a82-4b61-9171-c3b300558fa9/targets/test
infra-commit: fea5af139
job-version: v1.16.9-beta.0.17+bb616cba1cc983
repo: k8s.io/kubernetes
repo-commit: bb616cba1cc983fcac3166d1b8004b67cf69550c
repos: k8s.io/kubernetes (release-1.16)
revision: v1.16.9-beta.0.17+bb616cba1cc983

Test Failures


E2eNode Suite [k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup pod infra containers oom-score-adj should be -998 and best effort container's should be 1000 2m28s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sContainer\sManager\sMisc\s\[Serial\]\sValidate\sOOM\sscore\sadjustments\s\[NodeFeature\:OOMScoreAdj\]\sonce\sthe\snode\sis\ssetup\s\spod\sinfra\scontainers\soom\-score\-adj\sshould\sbe\s\-998\sand\sbest\seffort\scontainer\'s\sshould\sbe\s1000$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/container_manager_test.go:99
Timed out after 120.000s.
Expected
    <*errors.errorString | 0xc0008b36e0>: {
        s: "expected only one serve_hostname process; found 0",
    }
to be nil
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/container_manager_test.go:151
				
stdout/stderr from junit_ubuntu-gke-1804-1-15-v20200324_01.xml



E2eNode Suite [k8s.io] CriticalPod [Serial] [Disruptive] [NodeFeature:CriticalPod] when we need to admit a critical pod should be able to create and delete a critical pod 6.41s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sCriticalPod\s\[Serial\]\s\[Disruptive\]\s\[NodeFeature\:CriticalPod\]\swhen\swe\sneed\sto\sadmit\sa\scritical\spod\sshould\sbe\sable\sto\screate\sand\sdelete\sa\scritical\spod$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/critical_pod_test.go:55
failed to create PriorityClasses with an error: PriorityClass.scheduling.k8s.io "critical-pod-test-high-priority" is invalid: value: Forbidden: maximum allowed value of a user defined priority is 1000000000
Expected
    <bool>: false
to be true
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/critical_pod_test.go:89
				
stdout/stderr from junit_ubuntu-gke-1804-1-15-v20200324_01.xml



E2eNode Suite [k8s.io] Device Plugin [Feature:DevicePluginProbe][NodeFeature:DevicePluginProbe][Serial] DevicePlugin Verifies the Kubelet device plugin functionality. 5m38s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sDevice\sPlugin\s\[Feature\:DevicePluginProbe\]\[NodeFeature\:DevicePluginProbe\]\[Serial\]\sDevicePlugin\sVerifies\sthe\sKubelet\sdevice\splugin\sfunctionality\.$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/device_plugin_test.go:65
Unexpected error:
    <*errors.errorString | 0xc000218d30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:113
				
stdout/stderr from junit_ubuntu-gke-1804-1-15-v20200324_01.xml



E2eNode Suite [k8s.io] Downward API [Serial] [Disruptive] [NodeFeature:EphemeralStorage] Downward API tests for local ephemeral storage should provide container's limits.ephemeral-storage and requests.ephemeral-storage as env vars 5m9s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sDownward\sAPI\s\[Serial\]\s\[Disruptive\]\s\[NodeFeature\:EphemeralStorage\]\sDownward\sAPI\stests\sfor\slocal\sephemeral\sstorage\sshould\sprovide\scontainer\'s\slimits\.ephemeral\-storage\sand\srequests\.ephemeral\-storage\sas\senv\svars$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:293
Unexpected error:
    <*errors.errorString | 0xc000b41ed0>: {
        s: "expected pod \"downward-api-b8a393c6-20ae-4335-a7f6-9f095965c28b\" success: Gave up after waiting 5m0s for pod \"downward-api-b8a393c6-20ae-4335-a7f6-9f095965c28b\" to be \"success or failure\"",
    }
    expected pod "downward-api-b8a393c6-20ae-4335-a7f6-9f095965c28b" success: Gave up after waiting 5m0s for pod "downward-api-b8a393c6-20ae-4335-a7f6-9f095965c28b" to be "success or failure"
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1667
				
stdout/stderr from junit_ubuntu-gke-1804-1-15-v20200324_01.xml



E2eNode Suite [k8s.io] GarbageCollect [Serial][NodeFeature:GarbageCollect] Garbage Collection Test: Many Pods with Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container 15m18s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sGarbageCollect\s\[Serial\]\[NodeFeature\:GarbageCollect\]\sGarbage\sCollection\sTest\:\sMany\sPods\swith\sMany\sRestarting\sContainers\sShould\seventually\sgarbage\scollect\scontainers\swhen\swe\sexceed\sthe\snumber\sof\sdead\scontainers\sper\scontainer$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:170
Unexpected error:
    <*errors.errorString | 0xc000218d30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:113
				
stdout/stderr from junit_ubuntu-gke-1804-1-15-v20200324_01.xml



E2eNode Suite [k8s.io] GarbageCollect [Serial][NodeFeature:GarbageCollect] Garbage Collection Test: Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container 15m11s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sGarbageCollect\s\[Serial\]\[NodeFeature\:GarbageCollect\]\sGarbage\sCollection\sTest\:\sMany\sRestarting\sContainers\sShould\seventually\sgarbage\scollect\scontainers\swhen\swe\sexceed\sthe\snumber\sof\sdead\scontainers\sper\scontainer$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:170
Unexpected error:
    <*errors.errorString | 0xc000218d30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:113
				
stdout/stderr from junit_ubuntu-gke-1804-1-15-v20200324_01.xml



E2eNode Suite [k8s.io] InodeEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods 17m44s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sInodeEviction\s\[Slow\]\s\[Serial\]\s\[Disruptive\]\[NodeFeature\:Eviction\]\swhen\swe\srun\scontainers\sthat\sshould\scause\sDiskPressure\s\sshould\seventually\sevict\sall\sof\sthe\scorrect\spods$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:523
Mar 29 18:26:35.803: Failed to delete pod "container-inode-hog-pod": etcdserver: request timed out
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:175
				
stdout/stderr from junit_ubuntu-gke-1804-1-15-v20200324_01.xml



E2eNode Suite [k8s.io] LocalStorageCapacityIsolationEviction [Slow] [Serial] [Disruptive] [Feature:LocalStorageCapacityIsolation][NodeFeature:Eviction] when we run containers that should cause evictions due to pod local storage violations should eventually evict all of the correct pods 6m30s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sLocalStorageCapacityIsolationEviction\s\[Slow\]\s\[Serial\]\s\[Disruptive\]\s\[Feature\:LocalStorageCapacityIsolation\]\[NodeFeature\:Eviction\]\swhen\swe\srun\scontainers\sthat\sshould\scause\sevictions\sdue\sto\spod\slocal\sstorage\sviolations\s\sshould\seventually\sevict\sall\sof\sthe\scorrect\spods$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:455
Unexpected error:
    <*errors.errorString | 0xc000218d30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:113
				
stdout/stderr from junit_ubuntu-gke-1804-1-15-v20200324_01.xml



E2eNode Suite [k8s.io] LocalStorageEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods 6m31s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sLocalStorageEviction\s\[Slow\]\s\[Serial\]\s\[Disruptive\]\[NodeFeature\:Eviction\]\swhen\swe\srun\scontainers\sthat\sshould\scause\sDiskPressure\s\sshould\seventually\sevict\sall\sof\sthe\scorrect\spods$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:455
Unexpected error:
    <*errors.errorString | 0xc000218d30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:113
				
stdout/stderr from junit_ubuntu-gke-1804-1-15-v20200324_01.xml



E2eNode Suite [k8s.io] PriorityPidEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause PIDPressure should eventually evict all of the correct pods 6m9s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sPriorityPidEvictionOrdering\s\[Slow\]\s\[Serial\]\s\[Disruptive\]\[NodeFeature\:Eviction\]\swhen\swe\srun\scontainers\sthat\sshould\scause\sPIDPressure\s\sshould\seventually\sevict\sall\sof\sthe\scorrect\spods$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:455
Unexpected error:
    <*errors.errorString | 0xc000218d30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:113
				
stdout/stderr from junit_ubuntu-gke-1804-1-15-v20200324_01.xml



E2eNode Suite [sig-node] HugePages [Serial] [Feature:HugePages][NodeFeature:HugePages] With config updated with hugepages feature enabled should assign hugepages as expected based on the Pod spec 5m16s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[sig\-node\]\sHugePages\s\[Serial\]\s\[Feature\:HugePages\]\[NodeFeature\:HugePages\]\sWith\sconfig\supdated\swith\shugepages\sfeature\senabled\sshould\sassign\shugepages\sas\sexpected\sbased\son\sthe\sPod\sspec$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/hugepages_test.go:141
Unexpected error:
    <*errors.errorString | 0xc001139450>: {
        s: "Gave up after waiting 5m0s for pod \"pod9991d476-dd78-4573-a77b-154a1df20718\" to be \"success or failure\"",
    }
    Gave up after waiting 5m0s for pod "pod9991d476-dd78-4573-a77b-154a1df20718" to be "success or failure"
occurred
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/hugepages_test.go:169
				
stdout/stderr from junit_ubuntu-gke-1804-1-15-v20200324_01.xml



E2eNode Suite [sig-node] PodPidsLimit [Serial] [Feature:SupportPodPidsLimit][NodeFeature:SupportPodPidsLimit] With config updated with pids feature enabled should set pids.max for Pod 5m43s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[sig\-node\]\sPodPidsLimit\s\[Serial\]\s\[Feature\:SupportPodPidsLimit\]\[NodeFeature\:SupportPodPidsLimit\]\sWith\sconfig\supdated\swith\spids\sfeature\senabled\sshould\sset\spids\.max\sfor\sPod$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/pids_test.go:108
Unexpected error:
    <*errors.errorString | 0xc0001935a0>: {
        s: "Gave up after waiting 5m0s for pod \"pod46eb79ed-b82b-4752-bb3c-247758d21bac\" to be \"success or failure\"",
    }
    Gave up after waiting 5m0s for pod "pod46eb79ed-b82b-4752-bb3c-247758d21bac" to be "success or failure"
occurred
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/pids_test.go:135
				
stdout/stderr from junit_ubuntu-gke-1804-1-15-v20200324_01.xml



Node Tests 4h46m

error during go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=ubuntu-image-validation --zone=us-west1-b --ssh-user=prow --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=1 --focus="\[Serial\]" --skip="\[Flaky\]|\[Benchmark\]|\[NodeAlphaFeature:.+\]" --test_args=--feature-gates=DynamicKubeletConfig=true --test-timeout=5h0m0s --images=ubuntu-gke-1804-1-15-v20200324 --image-project=ubuntu-os-gke-cloud-devel: exit status 1
from junit_runner.xml



40 Passed Tests

268 Skipped Tests