Result: FAILURE
Tests: 7 failed / 89 succeeded
Started: 2020-01-16 00:26
Elapsed: 4h20m
Builder: gke-prow-default-pool-cf4891d4-tv62
pod: c9fc0dc9-37f6-11ea-8f3e-66e48c863062
resultstore: https://source.cloud.google.com/results/invocations/99a07417-8fb6-4485-844f-6610a46f3b26/targets/test
infra-commit: 70a5174aa
job-version: v1.18.0-alpha.1.784+82c9e5c814eb7a
repo: k8s.io/kubernetes
repo-commit: 82c9e5c814eb7acc6cc0a090c057294d0667ad66
repos: k8s.io/kubernetes: master, github.com/containerd/cri: master
revision: v1.18.0-alpha.1.784+82c9e5c814eb7a

Test Failures


E2eNode Suite [k8s.io] CriticalPod [Serial] [Disruptive] [NodeFeature:CriticalPod] when we need to admit a critical pod should be able to create and delete a critical pod (0.05s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sCriticalPod\s\[Serial\]\s\[Disruptive\]\s\[NodeFeature\:CriticalPod\]\swhen\swe\sneed\sto\sadmit\sa\scritical\spod\sshould\sbe\sable\sto\screate\sand\sdelete\sa\scritical\spod$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/critical_pod_test.go:54
failed to create PriorityClasses with an error: PriorityClass.scheduling.k8s.io "critical-pod-test-high-priority" is invalid: value: Forbidden: maximum allowed value of a user defined priority is 1000000000
Expected
    <bool>: false
to equal
    <bool>: true
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/critical_pod_test.go:88
				
From junit_cos-stable_01.xml


E2eNode Suite [k8s.io] CriticalPod [Serial] [Disruptive] [NodeFeature:CriticalPod] when we need to admit a critical pod should be able to create and delete a critical pod (0.15s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sCriticalPod\s\[Serial\]\s\[Disruptive\]\s\[NodeFeature\:CriticalPod\]\swhen\swe\sneed\sto\sadmit\sa\scritical\spod\sshould\sbe\sable\sto\screate\sand\sdelete\sa\scritical\spod$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/critical_pod_test.go:54
failed to create PriorityClasses with an error: PriorityClass.scheduling.k8s.io "critical-pod-test-high-priority" is invalid: value: Forbidden: maximum allowed value of a user defined priority is 1000000000
Expected
    <bool>: false
to equal
    <bool>: true
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/critical_pod_test.go:88
				
From junit_ubuntu_01.xml
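
Both CriticalPod failures are the same validation error: the test tries to create a PriorityClass whose value exceeds 1000000000 (1e9), the ceiling the API server enforces for user-defined priorities; values above that are reserved for the built-in system-node-critical and system-cluster-critical classes. For reference, a minimal client-go sketch of a PriorityClass that does pass that validation (the value, naming, and kubeconfig handling here are illustrative assumptions, not the test's actual code; assumes client-go >= v0.18, where Create takes a context):

package main

import (
	"context"
	"fmt"

	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a kubeconfig at the default location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pc := &schedulingv1.PriorityClass{
		ObjectMeta: metav1.ObjectMeta{Name: "critical-pod-test-high-priority"},
		// User-defined priorities must be <= 1000000000; anything higher
		// is rejected with the "Forbidden" error seen in the failures above.
		Value: 999999999,
	}
	if _, err := client.SchedulingV1().PriorityClasses().Create(
		context.TODO(), pc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("created PriorityClass", pc.Name)
}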


E2eNode Suite [k8s.io] InodeEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods (10m10s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sInodeEviction\s\[Slow\]\s\[Serial\]\s\[Disruptive\]\[NodeFeature\:Eviction\]\swhen\swe\srun\scontainers\sthat\sshould\scause\sDiskPressure\s\sshould\seventually\sevict\sall\sof\sthe\scorrect\spods$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:522
wait for pod "innocent-pod" to disappear
Expected success, but got an error:
    <*errors.StatusError | 0xc00098fea0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "etcdserver: request timed out",
            Reason: "",
            Details: nil,
            Code: 500,
        },
    }
    etcdserver: request timed out
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:145
				
From junit_ubuntu_01.xml
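
Here the eviction logic itself is not what failed: the wait at framework/pods.go:145 polls the API server for innocent-pod until Get returns NotFound, and a transient "etcdserver: request timed out" (HTTP 500) surfaced as a fatal error instead. A rough sketch of that kind of wait, assuming a plain client-go poll (names and intervals are illustrative, not the framework's exact code):

package e2esketch

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodToDisappear polls until the named pod no longer exists.
// A transient API error (such as "etcdserver: request timed out") is
// returned immediately and fails the wait, which is what happened in
// the InodeEviction run above.
func waitForPodToDisappear(client kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		_, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil // pod is gone
		}
		return false, err // keep polling (nil) or abort on a real API error
	})
}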


E2eNode Suite [k8s.io] LocalStorageEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods (14m4s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sLocalStorageEviction\s\[Slow\]\s\[Serial\]\s\[Disruptive\]\[NodeFeature\:Eviction\]\swhen\swe\srun\scontainers\sthat\sshould\scause\sDiskPressure\s\sshould\seventually\sevict\sall\sof\sthe\scorrect\spods$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:468
Timed out after 600.000s.
Expected
    <*errors.errorString | 0xc0019453c0>: {
        s: "pods that should be evicted are still running",
    }
to be nil
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:490
				
From junit_cos-stable_01.xml


E2eNode Suite [k8s.io] LocalStorageSoftEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods (14m26s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sLocalStorageSoftEviction\s\[Slow\]\s\[Serial\]\s\[Disruptive\]\[NodeFeature\:Eviction\]\swhen\swe\srun\scontainers\sthat\sshould\scause\sDiskPressure\s\sshould\seventually\sevict\sall\sof\sthe\scorrect\spods$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:468
Timed out after 600.000s.
Expected
    <*errors.errorString | 0xc001b93fd0>: {
        s: "pods that should be evicted are still running",
    }
to be nil
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:490
				
From junit_cos-stable_01.xml


E2eNode Suite [k8s.io] PriorityLocalStorageEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods (14m11s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sPriorityLocalStorageEvictionOrdering\s\[Slow\]\s\[Serial\]\s\[Disruptive\]\[NodeFeature\:Eviction\]\swhen\swe\srun\scontainers\sthat\sshould\scause\sDiskPressure\s\sshould\seventually\sevict\sall\sof\sthe\scorrect\spods$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:468
Timed out after 600.000s.
Expected
    <*errors.errorString | 0xc0015e4f20>: {
        s: "pods that should be evicted are still running",
    }
to be nil
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:490
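
The three DiskPressure failures share one signature: the suite pressures the node's local storage and then asserts at eviction_test.go:490 that every target pod is evicted within 10 minutes, and here the kubelet never evicted them, so the poll expired with "pods that should be evicted are still running". A hedged sketch of that style of assertion (the helper name, pod list, namespace, and intervals are assumptions for illustration, not the suite's exact code):

package e2esketch

import (
	"context"
	"fmt"
	"time"

	"github.com/onsi/gomega"
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// expectPodsEvicted polls for up to 10 minutes and fails with the same
// message as the timeouts above if any listed pod is still Running.
func expectPodsEvicted(client kubernetes.Interface, ns string, names []string) {
	gomega.Eventually(func() error {
		for _, name := range names {
			pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil && pod.Status.Phase == v1.PodRunning {
				return fmt.Errorf("pods that should be evicted are still running")
			}
		}
		return nil
	}, 10*time.Minute, 10*time.Second).Should(gomega.Succeed())
}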