Result: FAILURE
Tests: 11 failed / 44 succeeded
Started: 2020-01-15 18:02
Elapsed: 4h4m
Revision: v1.14.11-beta.1
Builder: gke-prow-default-pool-cf4891d4-e8h9
pod: 229b4db8-37c1-11ea-8603-da2f7a5855b4
resultstore: https://source.cloud.google.com/results/invocations/3780f975-eb29-4975-8a38-5d2869c9cdf2/targets/test
infra-commit: bee34c5da
job-version: v1.14.11-beta.1
repo: k8s.io/kubernetes
repo-commit: 6a71926e65a090cfb803175d6a0d57385a8ec982
repos: {'k8s.io/kubernetes': 'release-1.14'}

Test Failures


Node Tests 4h2m

error during go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=ubuntu-image-validation --zone=us-west1-b --ssh-user=prow --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=1 --focus="\[Serial\]" --skip="\[Flaky\]|\[Benchmark\]|\[NodeAlphaFeature:.+\]" --test_args=--feature-gates=DynamicKubeletConfig=true --test-timeout=5h0m0s --images=ubuntu-gke-1804-d1809-0-v20200114 --image-project=ubuntu-os-gke-cloud-devel: exit status 1
				from junit_runner.xml

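The `--focus`/`--skip` flags above, and the `--ginkgo.focus` pattern in each repro command below, are regular expressions built from the full Ginkgo test name. A minimal, dependency-free sketch of that escaping (`buildFocus` is a hypothetical helper; the report's own patterns additionally escape a few characters such as `-`, `:`, and quotes that `regexp.QuoteMeta` leaves alone):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// buildFocus turns a full test name into a --ginkgo.focus pattern:
// escape regexp metacharacters, replace spaces with \s so the pattern
// survives shell quoting, and anchor the end of the name.
func buildFocus(name string) string {
	return strings.ReplaceAll(regexp.QuoteMeta(name), " ", `\s`) + "$"
}

func main() {
	fmt.Println(buildFocus("Density [Serial] [Slow] create a batch of pods"))
	// Density\s\[Serial\]\s\[Slow\]\screate\sa\sbatch\sof\spods$
}
```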


[k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup pod infra containers oom-score-adj should be -998 and best effort container's should be 1000 2m22s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Container\sManager\sMisc\s\[Serial\]\sValidate\sOOM\sscore\sadjustments\s\[NodeFeature\:OOMScoreAdj\]\sonce\sthe\snode\sis\ssetup\s\spod\sinfra\scontainers\soom\-score\-adj\sshould\sbe\s\-998\sand\sbest\seffort\scontainer\'s\sshould\sbe\s1000$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/container_manager_test.go:98
Timed out after 120.000s.
Expected
    <*errors.errorString | 0xc0016fd7b0>: {
        s: "expected only one serve_hostname process; found 0",
    }
to be nil
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/container_manager_test.go:150
from junit_ubuntu-gke-1804-d1809-0-v20200114_01.xml



[k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval 5m22s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Density\s\[Serial\]\s\[Slow\]\screate\sa\sbatch\sof\spods\slatency\/resource\sshould\sbe\swithin\slimit\swhen\screate\s10\spods\swith\s0s\sinterval$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/density_test.go:63
Unexpected error:
    <*errors.errorString | 0xc000085760>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:110
from junit_ubuntu-gke-1804-d1809-0-v20200114_01.xml



[k8s.io] Docker features [Feature:Docker][Legacy:Docker] when live-restore is enabled [Serial] [Slow] [Disruptive] containers should not be disrupted when the daemon shuts down and restarts 5m22s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Docker\sfeatures\s\[Feature\:Docker\]\[Legacy\:Docker\]\swhen\slive\-restore\sis\senabled\s\[Serial\]\s\[Slow\]\s\[Disruptive\]\scontainers\sshould\snot\sbe\sdisrupted\swhen\sthe\sdaemon\sshuts\sdown\sand\srestarts$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/docker_test.go:41
Unexpected error:
    <*errors.errorString | 0xc000085760>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:110
from junit_ubuntu-gke-1804-d1809-0-v20200114_01.xml



[k8s.io] GarbageCollect [Serial][NodeFeature:GarbageCollect] Garbage Collection Test: One Non-restarting Container Should eventually garbage collect containers when we exceed the number of dead containers per container 5m8s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=GarbageCollect\s\[Serial\]\[NodeFeature\:GarbageCollect\]\sGarbage\sCollection\sTest\:\sOne\sNon\-restarting\sContainer\sShould\seventually\sgarbage\scollect\scontainers\swhen\swe\sexceed\sthe\snumber\sof\sdead\scontainers\sper\scontainer$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:170
Unexpected error:
    <*errors.errorString | 0xc000085760>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:110
from junit_ubuntu-gke-1804-d1809-0-v20200114_01.xml



[k8s.io] InodeEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods 7m35s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=InodeEviction\s\[Slow\]\s\[Serial\]\s\[Disruptive\]\[NodeFeature\:Eviction\]\swhen\swe\srun\scontainers\sthat\sshould\scause\sDiskPressure\s\sshould\seventually\sevict\sall\sof\sthe\scorrect\spods$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:454
Unexpected error:
    <*errors.errorString | 0xc000085760>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:110
from junit_ubuntu-gke-1804-d1809-0-v20200114_01.xml



[k8s.io] PriorityPidEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause PIDPressure should eventually evict all of the correct pods 6m18s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=PriorityPidEvictionOrdering\s\[Slow\]\s\[Serial\]\s\[Disruptive\]\[NodeFeature\:Eviction\]\swhen\swe\srun\scontainers\sthat\sshould\scause\sPIDPressure\s\sshould\seventually\sevict\sall\sof\sthe\scorrect\spods$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:454
Unexpected error:
    <*errors.errorString | 0xc000085760>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:110
from junit_ubuntu-gke-1804-d1809-0-v20200114_01.xml



[sig-node] HugePages [Serial] [Feature:HugePages][NodeFeature:HugePages] With config updated with hugepages feature enabled should assign hugepages as expected based on the Pod spec 5m26s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-node\]\sHugePages\s\[Serial\]\s\[Feature\:HugePages\]\[NodeFeature\:HugePages\]\sWith\sconfig\supdated\swith\shugepages\sfeature\senabled\sshould\sassign\shugepages\sas\sexpected\sbased\son\sthe\sPod\sspec$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/hugepages_test.go:162
Unexpected error:
    <*errors.errorString | 0xc0007ed8d0>: {
        s: "Gave up after waiting 5m0s for pod \"podd1ae63e3-37c8-11ea-8e87-42010a8a0038\" to be \"success or failure\"",
    }
    Gave up after waiting 5m0s for pod "podd1ae63e3-37c8-11ea-8e87-42010a8a0038" to be "success or failure"
occurred
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/hugepages_test.go:190
from junit_ubuntu-gke-1804-d1809-0-v20200114_01.xml



[sig-node] Node Performance Testing [Serial] [Slow] Run node performance testing with pre-defined workloads NAS parallel benchmark (NPB) suite - Embarrassingly Parallel (EP) workload 28s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-node\]\sNode\sPerformance\sTesting\s\[Serial\]\s\[Slow\]\sRun\snode\sperformance\stesting\swith\spre\-defined\sworkloads\sNAS\sparallel\sbenchmark\s\(NPB\)\ssuite\s\-\sEmbarrassingly\sParallel\s\(EP\)\sworkload$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/node_perf_test.go:115
Unexpected error:
    <*errors.errorString | 0xc0005b6b00>: {
        s: "pod ran to completion",
    }
    pod ran to completion
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:110
from junit_ubuntu-gke-1804-d1809-0-v20200114_01.xml

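All three Node Performance failures (EP, IS, and TensorFlow workloads) report the same `pod ran to completion` error. In the e2e framework that error is raised when a pod expected to stay Running is observed in a terminal phase, so polling is abandoned immediately; here the short benchmark pods appear to have exited before the readiness check saw them Running. A sketch of that phase check (hypothetical `checkStillRunning`, mirroring the framework's behavior as described):

```go
package main

import (
	"errors"
	"fmt"
)

// errPodCompleted mirrors the framework error seen above: a pod we expected
// to keep running has already reached a terminal phase.
var errPodCompleted = errors.New("pod ran to completion")

// checkStillRunning is a sketch of a pod-running condition: done when the
// pod is Running, a hard error once it is Succeeded or Failed (retrying is
// pointless), and "keep polling" otherwise.
func checkStillRunning(phase string) (done bool, err error) {
	switch phase {
	case "Running":
		return true, nil
	case "Succeeded", "Failed":
		return false, errPodCompleted
	default: // Pending, Unknown: keep waiting
		return false, nil
	}
}

func main() {
	_, err := checkStillRunning("Succeeded")
	fmt.Println(err)
	// pod ran to completion
}
```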


[sig-node] Node Performance Testing [Serial] [Slow] Run node performance testing with pre-defined workloads NAS parallel benchmark (NPB) suite - Integer Sort (IS) workload 8.29s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-node\]\sNode\sPerformance\sTesting\s\[Serial\]\s\[Slow\]\sRun\snode\sperformance\stesting\swith\spre\-defined\sworkloads\sNAS\sparallel\sbenchmark\s\(NPB\)\ssuite\s\-\sInteger\sSort\s\(IS\)\sworkload$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/node_perf_test.go:106
Unexpected error:
    <*errors.errorString | 0xc0005b6b00>: {
        s: "pod ran to completion",
    }
    pod ran to completion
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:110
from junit_ubuntu-gke-1804-d1809-0-v20200114_01.xml



[sig-node] Node Performance Testing [Serial] [Slow] Run node performance testing with pre-defined workloads TensorFlow workload 28s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-node\]\sNode\sPerformance\sTesting\s\[Serial\]\s\[Slow\]\sRun\snode\sperformance\stesting\swith\spre\-defined\sworkloads\sTensorFlow\sworkload$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/node_perf_test.go:124
Unexpected error:
    <*errors.errorString | 0xc0005b6b00>: {
        s: "pod ran to completion",
    }
    pod ran to completion
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:110
from junit_ubuntu-gke-1804-d1809-0-v20200114_01.xml



44 tests passed and 251 were skipped (not shown here).