Result: FAILURE
Tests: 7 failed / 46 succeeded
Started: 2020-01-16 03:08
Elapsed: 4h19m
Builder: gke-prow-default-pool-cf4891d4-hmsd
Pod: 6b631e45-380d-11ea-8f3e-66e48c863062
Resultstore: https://source.cloud.google.com/results/invocations/3130dda8-afcc-406a-b370-e0a874dd5d43/targets/test
Infra-commit: 70a5174aa
Job-version: v1.16.5-beta.1.51+e7f962ba86f4ce
Repo: k8s.io/kubernetes (release-1.16)
Repo-commit: e7f962ba86f4ce7033828210ca3556393c377bcc
Revision: v1.16.5-beta.1.51+e7f962ba86f4ce

Test Failures


E2eNode Suite [k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup pod infra containers oom-score-adj should be -998 and best effort container's should be 1000 2m12s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sContainer\sManager\sMisc\s\[Serial\]\sValidate\sOOM\sscore\sadjustments\s\[NodeFeature\:OOMScoreAdj\]\sonce\sthe\snode\sis\ssetup\s\spod\sinfra\scontainers\soom\-score\-adj\sshould\sbe\s\-998\sand\sbest\seffort\scontainer\'s\sshould\sbe\s1000$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/container_manager_test.go:99
Timed out after 120.000s.
Expected
    <*errors.errorString | 0xc000e17260>: {
        s: "expected only one serve_hostname process; found 0",
    }
to be nil
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/container_manager_test.go:151
				
From junit_ubuntu-gke-1804-d1809-0-v20200114_01.xml

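For reference, the failing assertion above checks the kubelet's OOM score contract: pod infra ("pause") containers get an oom_score_adj of -998, best-effort containers get 1000. The value can be inspected by hand on a node; this is a minimal sketch, with the current shell's PID standing in as a placeholder for a real container PID.

```shell
# Sketch of what the test asserts: the kubelet sets oom_score_adj to -998
# for pod infra (pause) containers and 1000 for best-effort containers.
# On a node, substitute a real container PID for $$; the shell's own PID
# is a placeholder here.
pid=$$
cat "/proc/${pid}/oom_score_adj"
```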


E2eNode Suite [k8s.io] CriticalPod [Serial] [Disruptive] [NodeFeature:CriticalPod] when we need to admit a critical pod should be able to create and delete a critical pod 6.22s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sCriticalPod\s\[Serial\]\s\[Disruptive\]\s\[NodeFeature\:CriticalPod\]\swhen\swe\sneed\sto\sadmit\sa\scritical\spod\sshould\sbe\sable\sto\screate\sand\sdelete\sa\scritical\spod$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/critical_pod_test.go:55
failed to create PriorityClasses with an error: PriorityClass.scheduling.k8s.io "critical-pod-test-high-priority" is invalid: value: Forbidden: maximum allowed value of a user defined priority is 1000000000
Expected
    <bool>: false
to be true
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/critical_pod_test.go:89
				
From junit_ubuntu-gke-1804-d1809-0-v20200114_01.xml

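The error above comes from the API server's cap on user-defined priorities: a PriorityClass whose value exceeds 1000000000 is rejected, since higher values are reserved for system classes. A manifest at the allowed maximum would look roughly like this; the name is taken from the failing test, the rest is a hedged sketch rather than the test's actual object.

```shell
# Emit a PriorityClass manifest at the maximum user-defined value
# (1000000000). The failing test attempted a value above this cap,
# which the API server forbids.
manifest=$(cat <<'EOF'
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: critical-pod-test-high-priority   # name from the failing test
value: 1000000000                         # maximum allowed user-defined value
globalDefault: false
EOF
)
echo "$manifest"
```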


E2eNode Suite [k8s.io] Device Plugin [Feature:DevicePluginProbe][NodeFeature:DevicePluginProbe][Serial] DevicePlugin Verifies the Kubelet device plugin functionality. 1m6s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sDevice\sPlugin\s\[Feature\:DevicePluginProbe\]\[NodeFeature\:DevicePluginProbe\]\[Serial\]\sDevicePlugin\sVerifies\sthe\sKubelet\sdevice\splugin\sfunctionality\.$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/device_plugin_test.go:65
Expected
    <int>: 8
to equal
    <int>: 2
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1339
				
From junit_ubuntu-gke-1804-d1809-0-v20200114_01.xml



E2eNode Suite [k8s.io] InodeEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods 16m58s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sInodeEviction\s\[Slow\]\s\[Serial\]\s\[Disruptive\]\[NodeFeature\:Eviction\]\swhen\swe\srun\scontainers\sthat\sshould\scause\sDiskPressure\s\sshould\seventually\sevict\sall\sof\sthe\scorrect\spods$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:469
Failed after 63.661s.
Expected
    <*errors.StatusError | 0xc000df2820>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "etcdserver: request timed out",
            Reason: "",
            Details: nil,
            Code: 500,
        },
    }
to be nil
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:517
				
From junit_ubuntu-gke-1804-d1809-0-v20200114_01.xml



E2eNode Suite [k8s.io] LocalStorageEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods 6m53s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sLocalStorageEviction\s\[Slow\]\s\[Serial\]\s\[Disruptive\]\[NodeFeature\:Eviction\]\swhen\swe\srun\scontainers\sthat\sshould\scause\sDiskPressure\s\sshould\seventually\sevict\sall\sof\sthe\scorrect\spods$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:455
Unexpected error:
    <*errors.errorString | 0xc00021ad30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:113
				
From junit_ubuntu-gke-1804-d1809-0-v20200114_01.xml



E2eNode Suite [k8s.io] Restart [Serial] [Slow] [Disruptive] [NodeFeature:ContainerRuntimeRestart] Container Runtime Network should recover from ip leak 24m25s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sRestart\s\[Serial\]\s\[Slow\]\s\[Disruptive\]\s\[NodeFeature\:ContainerRuntimeRestart\]\sContainer\sRuntime\sNetwork\sshould\srecover\sfrom\sip\sleak$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/restart_test.go:83
Unexpected error:
    <*errors.errorString | 0xc00021ad30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/resource_collector.go:381
				
From junit_ubuntu-gke-1804-d1809-0-v20200114_01.xml



Node Tests 4h18m

error during go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=ubuntu-image-validation --zone=us-west1-b --ssh-user=prow --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=1 --focus="\[Serial\]" --skip="\[Flaky\]|\[Benchmark\]|\[NodeAlphaFeature:.+\]" --test_args=--feature-gates=DynamicKubeletConfig=true --test-timeout=5h0m0s --images=ubuntu-gke-1804-d1809-0-v20200114 --image-project=ubuntu-os-gke-cloud-devel: exit status 1
From junit_runner.xml



Passed Tests: 46

Skipped Tests: 268