Result: FAILURE
Tests: 72 failed / 33 succeeded
Started: 2020-01-16 22:45
Elapsed: 15h6m
Builder: gke-prow-default-pool-cf4891d4-hdxc
pod: c8c45a74-38b1-11ea-a8ce-4a816efc965f
resultstore: https://source.cloud.google.com/results/invocations/91c6f77f-f333-4bed-aeea-5d24bf439a41/targets/test
infra-commit: 5f81c02cf
job-version: v1.15.9-beta.0.1+576595374055cc
revision: v1.15.9-beta.0.1+576595374055cc
master_os_image:
node_os_image: cos-77-12371-89-0

Test Failures


Kubernetes e2e suite [k8s.io] Downward API [Serial] [Disruptive] [NodeFeature:EphemeralStorage] Downward API tests for local ephemeral storage should provide container's limits.ephemeral-storage and requests.ephemeral-storage as env vars (10m3s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sDownward\sAPI\s\[Serial\]\s\[Disruptive\]\s\[NodeFeature\:EphemeralStorage\]\sDownward\sAPI\stests\sfor\slocal\sephemeral\sstorage\sshould\sprovide\scontainer\'s\slimits\.ephemeral\-storage\sand\srequests\.ephemeral\-storage\sas\senv\svars$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 17 08:35:46.644: Couldn't delete ns: "downward-api-820": namespace downward-api-820 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace downward-api-820 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
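For context on what this test exercises: the Downward API exposes a container's ephemeral-storage limit and request as environment variables via resourceFieldRef; the error above is the namespace teardown timing out, not the env var assertion itself. Below is a minimal Go sketch (not the e2e framework's own helper; the pod name, container name, image, and quantities are illustrative) of the kind of pod such a test creates:

package sketch

import (
    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ephemeralStorageDownwardAPIPod builds a pod whose container sees its own
// ephemeral-storage limit and request as env vars via resourceFieldRef.
// Names, image, and quantities are illustrative, not copied from the test.
func ephemeralStorageDownwardAPIPod() *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "downward-api-ephemeral-storage"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "dapi-container",
                Image:   "busybox",
                Command: []string{"sh", "-c", "env"},
                Resources: corev1.ResourceRequirements{
                    Requests: corev1.ResourceList{
                        corev1.ResourceEphemeralStorage: resource.MustParse("32Mi"),
                    },
                    Limits: corev1.ResourceList{
                        corev1.ResourceEphemeralStorage: resource.MustParse("64Mi"),
                    },
                },
                Env: []corev1.EnvVar{
                    {
                        Name: "EPHEMERAL_STORAGE_LIMIT",
                        ValueFrom: &corev1.EnvVarSource{
                            ResourceFieldRef: &corev1.ResourceFieldSelector{
                                ContainerName: "dapi-container",
                                Resource:      "limits.ephemeral-storage",
                            },
                        },
                    },
                    {
                        Name: "EPHEMERAL_STORAGE_REQUEST",
                        ValueFrom: &corev1.EnvVarSource{
                            ResourceFieldRef: &corev1.ResourceFieldSelector{
                                ContainerName: "dapi-container",
                                Resource:      "requests.ephemeral-storage",
                            },
                        },
                    },
                },
            }},
        },
    }
}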


Kubernetes e2e suite [k8s.io] EquivalenceCache [Serial] validates pod affinity works properly when new replica pod is scheduled (16m1s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sEquivalenceCache\s\[Serial\]\svalidates\spod\saffinity\sworks\sproperly\swhen\snew\sreplica\spod\sis\sscheduled$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/equivalence_cache_predicates.go:52
Unexpected error:
    <*errors.errorString | 0xc002870c00>: {
        s: "11 / 23 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nThere are too many bad pods. Please check log for details.",
    }
    11 / 23 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    There are too many bad pods. Please check log for details.
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/equivalence_cache_predicates.go:74
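The error above comes from test setup rather than the scheduler itself: the suite waits for kube-system pods to be Running and Ready before exercising pod affinity, and 11 of 23 were not within 5m0s, so no affinity check ran. For reference, a minimal Go sketch of the general shape of inter-pod affinity the test validates (the label and topology key are illustrative, not copied from the test):

package sketch

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// replicaPodAffinity co-schedules a new replica onto a node that already
// runs pods matching the given labels. The label selector and topology key
// below are illustrative placeholders.
func replicaPodAffinity() *corev1.Affinity {
    return &corev1.Affinity{
        PodAffinity: &corev1.PodAffinity{
            RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
                LabelSelector: &metav1.LabelSelector{
                    MatchLabels: map[string]string{"security": "S1"},
                },
                TopologyKey: "kubernetes.io/hostname",
            }},
        },
    }
}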