Result: FAILURE
Tests: 4 failed / 314 succeeded
Started: 2020-10-23 22:32
Elapsed: 1h53m
Revision
Builder: 9f7f97c0-157f-11eb-b256-6ee25ea2e440
infra-commit: d88010efd
job-version: v1.20.0-alpha.3.101+237dae5a5efcc1
revision: v1.20.0-alpha.3.101+237dae5a5efcc1

Test Failures


Kubernetes e2e suite [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance] 1.30s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cli\]\sKubectl\sclient\sKubectl\sserver\-side\sdry\-run\sshould\scheck\sif\skubectl\scan\sdry\-run\supdate\sPods\s\[Conformance\]$'
/workspace/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Oct 24 00:00:07.568: Unexpected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running /workspace/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-9081 replace -f - --dry-run=server:\nCommand stdout:\n\nstderr:\nError from server (Conflict): error when replacing \"STDIN\": Operation cannot be fulfilled on pods \"e2e-test-httpd-pod\": the object has been modified; please apply your changes to the latest version and try again\n\nerror:\nexit status 1",
        },
        Code: 1,
    }
    error running /workspace/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-9081 replace -f - --dry-run=server:
    Command stdout:
    
    stderr:
    Error from server (Conflict): error when replacing "STDIN": Operation cannot be fulfilled on pods "e2e-test-httpd-pod": the object has been modified; please apply your changes to the latest version and try again
    
    error:
    exit status 1
occurred
/workspace/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:598
				
stdout/stderr: junit_01.xml

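The Conflict error above is Kubernetes' optimistic concurrency control at work: every object carries a resourceVersion, and a replace is rejected when the submitted resourceVersion is stale. With `--dry-run=server` the request is still fully validated server-side, so a stale version yields the same 409 Conflict as a real replace. The following is a minimal sketch of that check; the `Store` class and its method signatures are invented for illustration, not apiserver code:

```python
# Sketch of the optimistic-concurrency check behind the 409 Conflict.
# The names (resource_version, dry_run) mirror Kubernetes concepts;
# the Store itself is a hypothetical stand-in for the apiserver.

class Conflict(Exception):
    """Raised when a write carries a stale resourceVersion (HTTP 409)."""

class Store:
    def __init__(self):
        self._objects = {}   # name -> (resource_version, spec)
        self._counter = 0

    def create(self, name, spec):
        self._counter += 1
        self._objects[name] = (self._counter, spec)
        return self._counter

    def get(self, name):
        return self._objects[name]

    def replace(self, name, spec, resource_version, dry_run=False):
        current_rv, _ = self._objects[name]
        if resource_version != current_rv:
            # The same check runs for dry-run requests: validation happens
            # server-side, so a stale resourceVersion still conflicts.
            raise Conflict(
                f'Operation cannot be fulfilled on pods "{name}": '
                "the object has been modified; please apply your changes "
                "to the latest version and try again"
            )
        self._counter += 1
        if not dry_run:
            self._objects[name] = (self._counter, spec)
        return self._counter

store = Store()
rv = store.create("e2e-test-httpd-pod", {"image": "httpd:2.4.38"})
store.replace("e2e-test-httpd-pod", {"image": "httpd:2.4.39"}, rv)
try:
    # Replay with the now-stale resourceVersion, as happens when something
    # else mutates the pod between the test's read and its replace.
    store.replace("e2e-test-httpd-pod", {"image": "httpd"}, rv, dry_run=True)
except Conflict as e:
    print("Conflict:", e)
```

The fix pattern on the client side is to re-read the object (picking up the latest resourceVersion) and retry the replace, which is what the error message itself suggests.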


Kubernetes e2e suite [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance] 1m0s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-scheduling\]\sSchedulerPreemption\s\[Serial\]\svalidates\sbasic\spreemption\sworks\s\[Conformance\]$'
/workspace/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Oct 24 00:07:57.505: We need at least two pods to be created but all nodes are already heavily utilized, so preemption tests cannot be run
/workspace/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
				
stdout/stderr: junit_01.xml



Kubernetes e2e suite [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance] 1m0s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-scheduling\]\sSchedulerPreemption\s\[Serial\]\svalidates\slower\spriority\spod\spreemption\sby\scritical\spod\s\[Conformance\]$'
/workspace/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Oct 24 00:15:43.186: We need at least two pods to be created but all nodes are already heavily utilized, so preemption tests cannot be run
/workspace/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
				
stdout/stderr: junit_01.xml

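Both SchedulerPreemption failures are a precondition check, not a scheduling bug: the test needs headroom to create at least two pods of its own, and bails out when every node's allocatable capacity is already consumed. A rough sketch of that kind of capacity check is below; the node numbers are invented for illustration, and the real framework reads allocatable resources from each node's status rather than from a list like this:

```python
# Sketch of a "can we fit at least two test pods?" precondition, similar in
# spirit to what the preemption test enforces. Node capacities here are
# made-up numbers, not values from the failing job.

def schedulable_pods(nodes, pod_cpu_m):
    """Count how many pods requesting pod_cpu_m millicores fit cluster-wide."""
    total = 0
    for allocatable_m, requested_m in nodes:
        free = allocatable_m - requested_m
        total += max(0, free // pod_cpu_m)
    return total

# Every node is already heavily utilized: almost no free CPU anywhere.
nodes = [(2000, 1950), (2000, 1980)]  # (allocatable, already requested) mCPU
fits = schedulable_pods(nodes, pod_cpu_m=100)
if fits < 2:
    print("We need at least two pods to be created but all nodes are "
          "already heavily utilized, so preemption tests cannot be run")
```

Because the test is tagged [Serial], this usually points at leftover workloads or an undersized cluster at the time the suite ran, rather than at the preemption logic under test.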


Test 1h43m

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Conformance\] --ginkgo.skip=\[sig\-apps\]\sDaemon\sset\s\[Serial\]\sshould\srollback\swithout\sunnecessary\srestarts\s\[Conformance\] --report-dir=/workspace/_artifacts --disable-log-dump=true: exit status 1
from junit_runner.xml



Passed tests: 314
Skipped tests: 4924