Result: FAILURE
Tests: 33 failed / 739 succeeded
Started: 2020-01-07 23:43
Elapsed: 1h26m
Revision: v1.14.11-beta.1
Builder: gke-prow-default-pool-cf4891d4-9zv1
pod: 53ac24f7-31a7-11ea-9709-02f27a93e62e
resultstore: https://source.cloud.google.com/results/invocations/32a31a59-3eaa-40b9-b0a8-7698759af594/targets/test
infra-commit: 98b6e6085
job-version: v1.14.11-beta.1
master_os_image: cos-beta-73-11647-64-0
node_os_image: ubuntu-gke-1804-d1809-0-v20191218

Test Failures


Test 1h8m

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\] --minStartupPods=8 --report-dir=/workspace/_artifacts --disable-log-dump=true: exit status 1
				from junit_runner.xml



[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] 13m11s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Container\sLifecycle\sHook\swhen\screate\sa\spod\swith\slifecycle\shook\sshould\sexecute\sprestop\sexec\shook\sproperly\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
wait for pod "pod-with-prestop-exec-hook" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc0002b7420>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:175
				
from junit_27.xml



[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance] 15m4s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Security\sContext\swhen\screating\scontainers\swith\sAllowPrivilegeEscalation\sshould\sallow\sprivilege\sescalation\swhen\snot\sexplicitly\sset\sand\suid\s\!\=\s0\s\[LinuxOnly\]\s\[NodeConformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:236
wait for pod "alpine-nnp-nil-b8367cde-31ab-11ea-a69a-0afc9e7a9191" to success
Expected success, but got an error:
    <*errors.errorString | 0xc0003f0ea0>: {
        s: "Gave up after waiting 5m0s for pod \"alpine-nnp-nil-b8367cde-31ab-11ea-a69a-0afc9e7a9191\" to be \"success or failure\"",
    }
    Gave up after waiting 5m0s for pod "alpine-nnp-nil-b8367cde-31ab-11ea-a69a-0afc9e7a9191" to be "success or failure"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:229
				
from junit_11.xml



[k8s.io] [sig-node] kubelet [k8s.io] [sig-node] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s. 15m24s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-node\]\skubelet\s\[k8s\.io\]\s\[sig\-node\]\sClean\sup\spods\son\snode\skubelet\sshould\sbe\sable\sto\sdelete\s10\spods\sper\snode\sin\s1m0s\.$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:312
Unexpected error:
    <*errors.errorString | 0xc0021f3cf0>: {
        s: "Only 21 pods started out of 30",
    }
    Only 21 pods started out of 30
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:325
				
from junit_28.xml

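The "Only 21 pods started out of 30" message comes from a check that counts pods that reached the Running phase against the expected total before the 1m0s-per-10-pods budget is enforced. A standalone sketch of that tally (pod phases are simplified to strings here; the real check in test/e2e/node/kubelet.go works against corev1 pod objects):

```go
// Sketch of the pod-startup tally behind "Only 21 pods started out of 30".
// Phases are plain strings for illustration, not the real corev1.PodPhase type.
package main

import "fmt"

// countRunning tallies pods in the Running phase, as the e2e check does
// before comparing against the expected count.
func countRunning(phases []string) int {
	n := 0
	for _, p := range phases {
		if p == "Running" {
			n++
		}
	}
	return n
}

// startupError mirrors the shape of the failure message when too few
// pods reached Running in time.
func startupError(phases []string, want int) error {
	if got := countRunning(phases); got < want {
		return fmt.Errorf("Only %d pods started out of %d", got, want)
	}
	return nil
}

func main() {
	// 21 of 30 pods running, as in the failure above.
	phases := make([]string, 30)
	for i := range phases {
		if i < 21 {
			phases[i] = "Running"
		} else {
			phases[i] = "Pending"
		}
	}
	fmt.Println(startupError(phases, 30))
}
```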


[sig-api-machinery] AdmissionWebhook Should honor timeout 5m10s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-api\-machinery\]\sAdmissionWebhook\sShould\shonor\stimeout$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:94
waiting for the deployment status valid%!(EXTRA string=gcr.io/kubernetes-e2e-test-images/webhook:1.14v1, string=sample-webhook-deployment, string=webhook-3036)
Unexpected error:
    <*errors.errorString | 0xc001914960>: {
        s: "error waiting for deployment \"sample-webhook-deployment\" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:\"Available\", Status:\"False\", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714039088, loc:(*time.Location)(0x8806100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714039088, loc:(*time.Location)(0x8806100)}}, Reason:\"MinimumReplicasUnavailable\", Message:\"Deployment does not have minimum availability.\"}, v1.DeploymentCondition{Type:\"Progressing\", Status:\"True\", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714039088, loc:(*time.Location)(0x8806100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714039088, loc:(*time.Location)(0x8806100)}}, Reason:\"ReplicaSetUpdated\", Message:\"ReplicaSet \\\"sample-webhook-deployment-55f5b748bb\\\" is progressing.\"}}, CollisionCount:(*int32)(nil)}",
    }
    error waiting for deployment "sample-webhook-deployment" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714039088, loc:(*time.Location)(0x8806100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714039088, loc:(*time.Location)(0x8806100)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714039088, loc:(*time.Location)(0x8806100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714039088, loc:(*time.Location)(0x8806100)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-55f5b748bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:355
				
from junit_21.xml

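The DeploymentStatus dump above is the key diagnostic: Available=False with reason MinimumReplicasUnavailable while Progressing=True means the webhook pod's replica set was created but never became Ready, so the helper kept polling the Available condition until it gave up. A standalone sketch of how that condition is read (the struct fields mirror appsv1.DeploymentStatus/DeploymentCondition, but these are simplified illustration types, not the real API):

```go
// Sketch of reading a Deployment's "Available" condition, as dumped in
// the failure above. Types are simplified stand-ins for the appsv1 API.
package main

import "fmt"

type condition struct {
	Type, Status, Reason string
}

type deploymentStatus struct {
	Replicas, ReadyReplicas, UnavailableReplicas int32
	Conditions                                   []condition
}

// isAvailable reports whether the Available condition is True, which is
// what the e2e helper waits for before running the webhook test.
func isAvailable(s deploymentStatus) bool {
	for _, c := range s.Conditions {
		if c.Type == "Available" {
			return c.Status == "True"
		}
	}
	return false
}

func main() {
	// The status from the failure: one replica updated but never ready.
	s := deploymentStatus{
		Replicas: 1, ReadyReplicas: 0, UnavailableReplicas: 1,
		Conditions: []condition{
			{Type: "Available", Status: "False", Reason: "MinimumReplicasUnavailable"},
			{Type: "Progressing", Status: "True", Reason: "ReplicaSetUpdated"},
		},
	}
	fmt.Println(isAvailable(s)) // false
}
```

Since Progressing was still True, the deployment had not hit its progress deadline; the pod was most likely stuck pulling its image or failing readiness, which the junit stdout/stderr would confirm.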


[sig-apps] Job should delete a job 39m6s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-apps\]\sJob\sshould\sdelete\sa\sjob$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:102
Unexpected error:
    <*errors.errorString | 0xc0007aaef0>: {
        s: "error while waiting for pods gone foo: there are 1 pods left. E.g. \"foo-cqsqt\" on node \"test-be0876f9fb-minion-group-97bz\"",
    }
    error while waiting for pods gone foo: there are 1 pods left. E.g. "foo-cqsqt" on node "test-be0876f9fb-minion-group-97bz"
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:113
				
from junit_02.xml



[sig-cli] Kubectl Port forwarding [k8s.io] With a server listening on localhost [k8s.io] that expects NO client request should support a client that connects, sends DATA, and disconnects 15m5s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-cli\]\sKubectl\sPort\sforwarding\s\[k8s\.io\]\sWith\sa\sserver\slistening\son\slocalhost\s\[k8s\.io\]\sthat\sexpects\sNO\sclient\srequest\sshould\ssupport\sa\sclient\sthat\sconnects\,\ssends\sDATA\,\sand\sdisconnects$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:470
Jan  8 00:16:46.031: Pod did not start running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:209
				
from junit_17.xml



[sig-network] Network should set TCP CLOSE_WAIT timeout 5m12s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-network\]\sNetwork\sshould\sset\sTCP\sCLOSE\_WAIT\stimeout$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/kube_proxy.go:50
Unexpected error:
    <*errors.errorString | 0xc0002a7420>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:110
				
from junit_02.xml



[sig-network] Services should be able to up and down services 6m11s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-network\]\sServices\sshould\sbe\sable\sto\sup\sand\sdown\sservices$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:314
Unexpected error:
    <*errors.errorString | 0xc0002cf410>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:3500