Result: FAILURE
Tests: 5 failed / 48 succeeded
Started: 2020-01-11 15:24
Elapsed: 4h4m
Revision: v1.15.8-beta.1.30+14ede42c4fe699
Builder: gke-prow-default-pool-cf4891d4-l1tg
Pod: 523c2357-3486-11ea-9fef-d200904e1a96
Resultstore: https://source.cloud.google.com/results/invocations/afbe2b37-8bd8-410f-ab04-6a840f329fbf/targets/test
infra-commit: b82ca85d5
job-version: v1.15.8-beta.1.30+14ede42c4fe699
repo: k8s.io/kubernetes
repo-commit: 14ede42c4fe699a7078b566d89abc160f26857a2
repos: k8s.io/kubernetes (branch release-1.15)

Test Failures


E2eNode Suite [k8s.io] GarbageCollect [Serial][NodeFeature:GarbageCollect] Garbage Collection Test: Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container (15m8s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sGarbageCollect\s\[Serial\]\[NodeFeature\:GarbageCollect\]\sGarbage\sCollection\sTest\:\sMany\sRestarting\sContainers\sShould\seventually\sgarbage\scollect\scontainers\swhen\swe\sexceed\sthe\snumber\sof\sdead\scontainers\sper\scontainer$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:170
Unexpected error:
    <*errors.errorString | 0xc000214500>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:113
stdout/stderr from junit_ubuntu-gke-1804-d1703-0-v20200110_01.xml

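The "timed out waiting for the condition" message here (and in two of the failures below) is the stock wait.ErrWaitTimeout from k8s.io/apimachinery/pkg/util/wait, surfaced by the pod-wait helper at test/e2e/framework/pods.go:113. A minimal sketch of how a polling wait produces it; the interval, timeout, and condition body are illustrative, not the framework's actual values:

    package main

    import (
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
        // wait.Poll checks the condition on an interval until the timeout
        // elapses; if the condition never returns true, the returned error
        // is wait.ErrWaitTimeout, whose text is exactly the string above.
        err := wait.Poll(2*time.Second, 10*time.Second, func() (bool, error) {
            return false, nil // illustrative: e.g. a pod that never reaches Running
        })
        fmt.Println(err) // timed out waiting for the condition
    }

Because the error is generic, the junit stdout/stderr is the only place to see which condition actually failed to converge.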


E2eNode Suite [k8s.io] LocalStorageCapacityIsolationEviction [Slow] [Serial] [Disruptive] [Feature:LocalStorageCapacityIsolation][NodeFeature:Eviction] when we run containers that should cause evictions due to pod local storage violations should eventually evict all of the correct pods (6m38s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sLocalStorageCapacityIsolationEviction\s\[Slow\]\s\[Serial\]\s\[Disruptive\]\s\[Feature\:LocalStorageCapacityIsolation\]\[NodeFeature\:Eviction\]\swhen\swe\srun\scontainers\sthat\sshould\scause\sevictions\sdue\sto\spod\slocal\sstorage\sviolations\s\sshould\seventually\sevict\sall\sof\sthe\scorrect\spods$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:455
Unexpected error:
    <*errors.errorString | 0xc000214500>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:113
stdout/stderr from junit_ubuntu-gke-1804-d1703-0-v20200110_01.xml

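Every repro command in this report builds its --ginkgo.focus pattern the same way: the full test name with regex metacharacters escaped, spaces replaced by \s, and a trailing $ anchor. A rough sketch of that transformation in Go (illustrative only; the patterns above also escape ':' and '-', which regexp.QuoteMeta leaves untouched):

    package main

    import (
        "fmt"
        "regexp"
        "strings"
    )

    func main() {
        // Approximate the focus pattern: escape regex metacharacters,
        // turn spaces into \s, and anchor the end of the name.
        name := "E2eNode Suite [k8s.io] GarbageCollect [Serial]"
        focus := strings.ReplaceAll(regexp.QuoteMeta(name), " ", `\s`) + "$"
        fmt.Println(focus) // E2eNode\sSuite\s\[k8s\.io\]\sGarbageCollect\s\[Serial\]$
    }

This is why a single failing spec can be re-run in isolation with the hack/e2e.go invocations shown above.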


E2eNode Suite [k8s.io] LocalStorageSoftEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods (6m13s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sLocalStorageSoftEviction\s\[Slow\]\s\[Serial\]\s\[Disruptive\]\[NodeFeature\:Eviction\]\swhen\swe\srun\scontainers\sthat\sshould\scause\sDiskPressure\s\sshould\seventually\sevict\sall\sof\sthe\scorrect\spods$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:523
Unexpected error:
    <*errors.errorString | 0xc0004151e0>: {
        s: "pod ran to completion",
    }
    pod ran to completion
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:113
stdout/stderr from junit_ubuntu-gke-1804-d1703-0-v20200110_01.xml

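Unlike the timeouts above, this failure aborted early: "pod ran to completion" is the error the e2e framework's pod-running condition returns when a pod being waited on reaches a terminal phase, since such a pod can never become Running. A sketch of that check, assuming a shape like the framework's condition functions (not its exact code):

    package main

    import (
        "errors"
        "fmt"

        v1 "k8s.io/api/core/v1"
    )

    // podRunningCondition mimics the check behind this failure: while
    // polling for Running, a terminal phase is reported as a permanent
    // error so the wait aborts instead of timing out.
    func podRunningCondition(pod *v1.Pod) (bool, error) {
        switch pod.Status.Phase {
        case v1.PodRunning:
            return true, nil
        case v1.PodSucceeded, v1.PodFailed:
            return false, errors.New("pod ran to completion")
        default:
            return false, nil
        }
    }

    func main() {
        pod := &v1.Pod{Status: v1.PodStatus{Phase: v1.PodSucceeded}}
        if _, err := podRunningCondition(pod); err != nil {
            fmt.Println(err) // pod ran to completion
        }
    }

Here the eviction test's workload pod apparently exited instead of staying up under DiskPressure, which the wait helper treats as a hard failure.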


E2eNode Suite [sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 10 pods per node (5m22s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[sig\-node\]\sResource\-usage\s\[Serial\]\s\[Slow\]\sregular\sresource\susage\stracking\sresource\stracking\sfor\s10\spods\sper\snode$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/resource_usage_test.go:49
Unexpected error:
    <*errors.errorString | 0xc000214500>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:113
stdout/stderr from junit_ubuntu-gke-1804-d1703-0-v20200110_01.xml



Node Tests (4h2m)

error during go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=ubuntu-image-validation --zone=us-west1-b --ssh-user=prow --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=1 --focus="\[Serial\]" --skip="\[Flaky\]|\[Benchmark\]|\[NodeAlphaFeature:.+\]" --test_args=--feature-gates=DynamicKubeletConfig=true --test-timeout=5h0m0s --images=ubuntu-gke-1804-d1703-0-v20200110 --image-project=ubuntu-os-gke-cloud-devel: exit status 1
from junit_runner.xml



Passed tests: 48
Skipped tests: 264