Result: FAILURE
Tests: 65 failed / 262 succeeded
Started: 2020-01-09 18:30
Elapsed: 15h6m
Builder: gke-prow-default-pool-cf4891d4-r4zq
Pod: 01dd26db-330e-11ea-a4dc-8213d962824c
Resultstore: https://source.cloud.google.com/results/invocations/df4aad6a-23c4-4bec-be83-bcb5979be7bd/targets/test
infra-commit: 275225376
job-version: v1.16.5-beta.1.43+926ccb6ccb4e65
node_os_image: cos-77-12371-89-0
revision: v1.16.5-beta.1.43+926ccb6ccb4e65

Test Failures


Kubernetes e2e suite [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] (10m43s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sInitContainer\s\[NodeConformance\]\sshould\snot\sstart\sapp\scontainers\sif\sinit\scontainers\sfail\son\sa\sRestartAlways\spod\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 10 04:26:57.062: Couldn't delete ns: "init-container-225": namespace init-container-225 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace init-container-225 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:336
stdout/stderr in junit_01.xml
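Most failures in this run share one signature: the test namespace is reported empty but stays in Terminating until the ten-minute deletion wait gives up. A minimal triage sketch, assuming direct kubectl access to the test cluster and using the namespace from this failure (init-container-225) as the example; the same steps apply to the other "Couldn't delete ns" failures below:

# Inspect the namespace's status conditions and any remaining finalizers.
kubectl get namespace init-container-225 -o yaml

# Enumerate everything still present in the namespace; "namespace is empty"
# in the failure message suggests this should come back clean.
kubectl api-resources --verbs=list --namespaced -o name \
  | xargs -n1 kubectl get -n init-container-225 --ignore-not-found

# If spec.finalizers is non-empty, clearing it lets deletion complete.
# Use with care: this skips whatever cleanup the finalizer was guarding.
kubectl get namespace init-container-225 -o json \
  | jq '.spec.finalizers = []' \
  | kubectl replace --raw /api/v1/namespaces/init-container-225/finalize -f -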


Kubernetes e2e suite [k8s.io] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently (10m23s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sNodeLease\swhen\sthe\sNodeLease\sfeature\sis\senabled\sthe\skubelet\sshould\sreport\snode\sstatus\sinfrequently$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 10 06:19:38.203: Couldn't delete ns: "node-lease-test-8422": namespace node-lease-test-8422 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace node-lease-test-8422 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:336
stdout/stderr in junit_01.xml


Kubernetes e2e suite [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] (10m3s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sPods\sshould\ssupport\sretrieving\slogs\sfrom\sthe\scontainer\sover\swebsockets\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 10 04:16:13.768: Couldn't delete ns: "pods-9528": namespace pods-9528 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace pods-9528 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:336
stdout/stderr in junit_01.xml


Kubernetes e2e suite [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance] (10m5s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sSecurity\sContext\swhen\screating\scontainers\swith\sAllowPrivilegeEscalation\sshould\sallow\sprivilege\sescalation\swhen\strue\s\[LinuxOnly\]\s\[NodeConformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 10 05:17:16.415: Couldn't delete ns: "security-context-test-9731": namespace security-context-test-9731 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace security-context-test-9731 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:336
stdout/stderr in junit_01.xml


Kubernetes e2e suite [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] (10m3s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sVariable\sExpansion\sshould\sallow\ssubstituting\svalues\sin\sa\scontainer'\''s\scommand\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 10 05:27:20.059: Couldn't delete ns: "var-expansion-9227": namespace var-expansion-9227 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace var-expansion-9227 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:336
stdout/stderr in junit_01.xml


Kubernetes e2e suite [k8s.io] [sig-cloud-provider-gcp] Upgrade [Feature:Upgrade] master upgrade should maintain a functioning cluster [Feature:MasterUpgrade] (23m53s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-cloud\-provider\-gcp\]\sUpgrade\s\[Feature\:Upgrade\]\smaster\supgrade\sshould\smaintain\sa\sfunctioning\scluster\s\[Feature\:MasterUpgrade\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cloud/gcp/cluster_upgrade.go:91
Jan  9 18:55:45.692: timeout waiting 15m0s for 3 replicas
Unexpected error:
    <*errors.errorString | 0xc0000dd950>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/horizontal_pod_autoscalers.go:90
stdout/stderr in junit_upgrade01.xml
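The upgrade test timed out waiting for a HorizontalPodAutoscaler to reach 3 replicas (horizontal_pod_autoscalers.go:90). A hedged checklist, assuming kubectl access; the HPA and namespace names are created by the test and do not appear in this excerpt, so placeholders are used:

# Compare current vs. desired replicas and read the HPA's recent events.
kubectl get hpa --all-namespaces
kubectl describe hpa <hpa-name> -n <test-namespace>

# The HPA depends on the resource-metrics API; if it stopped serving after
# the master upgrade, scaling stalls in exactly this way.
kubectl get apiservice v1beta1.metrics.k8s.io
kubectl top pods -n <test-namespace>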


Kubernetes e2e suite [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] (1m20s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sAggregator\sShould\sbe\sable\sto\ssupport\sthe\s1\.10\sSample\sAPI\sServer\susing\sthe\scurrent\sAggregator\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Jan  9 22:17:55.951: gave up waiting for apiservice wardle to come up successfully
Unexpected error:
    <*errors.errorString | 0xc00009f010>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:392
stdout/stderr in junit_01.xml
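Here the aggregated "wardle" sample API server never became Available (aggregator.go:392). A triage sketch, assuming kubectl access; the exact APIService and deployment names come from the test's setup, so they are matched by grep rather than guessed:

# Availability condition of every aggregated APIService.
kubectl get apiservices

# Reason/message explaining why the wardle APIService is unavailable.
kubectl get apiservices -o name | grep -i wardle | xargs kubectl describe

# The sample API server runs as ordinary pods in the test's namespace;
# "sample-apiserver" is an assumption about how the test names them.
kubectl get pods --all-namespaces | grep -i sample-apiserver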


Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] (10m25s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sCustomResourcePublishOpenAPI\s\[Privileged\:ClusterAdmin\]\sremoves\sdefinition\sfrom\sspec\swhen\sone\sversion\sgets\schanged\sto\snot\sbe\sserved\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 10 04:06:10.290: Couldn't delete ns: "crd-publish-openapi-1013": namespace crd-publish-openapi-1013 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace crd-publish-openapi-1013 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:336
stdout/stderr in junit_01.xml


Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] (10m22s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sCustomResourcePublishOpenAPI\s\[Privileged\:ClusterAdmin\]\sworks\sfor\smultiple\sCRDs\sof\sdifferent\sgroups\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 10 05:47:51.697: Couldn't delete ns: "crd-publish-openapi-391": namespace crd-publish-openapi-391 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace crd-publish-openapi-391 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:336
stdout/stderr in junit_01.xml


Kubernetes e2e suite [sig-apps] Deployment deployment should delete old replica sets [Conformance] (10m12s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sDeployment\sdeployment\sshould\sdelete\sold\sreplica\ssets\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 10 07:42:42.787: Couldn't delete ns: "deployment-2823": namespace deployment-2823 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace deployment-2823 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:336
stdout/stderr in junit_01.xml


Kubernetes e2e suite [sig-apps] Job should run a job to completion when tasks sometimes fail and are not locally restarted (10m9s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sJob\sshould\srun\sa\sjob\sto\scompletion\swhen\stasks\ssometimes\sfail\sand\sare\snot\slocally\srestarted$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 10 04:57:09.821: Couldn't delete ns: "job-4282": namespace job-4282 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace job-4282 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:336
stdout/stderr in junit_01.xml


Kubernetes e2e suite [sig-apps] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] (12m48s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sNetwork\sPartition\s\[Disruptive\]\s\[Slow\]\s\[k8s\.io\]\s\[StatefulSet\]\sshould\scome\sback\sup\sif\snode\sgoes\sdown\s\[Slow\]\s\[Disruptive\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 10 02:04:17.582: Couldn't delete ns: "network-partition-1877": namespace network-partition-1877 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace network-partition-1877 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:336
stdout/stderr in junit_01.xml


Kubernetes e2e suite [sig-cli] Kubectl client Kubectl create quota should create a quota with scopes (10m2s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cli\]\sKubectl\sclient\sKubectl\screate\squota\sshould\screate\sa\squota\swith\sscopes$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 10 09:02:20.061: Couldn't delete ns: "kubectl-1823": namespace kubectl-1823 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace kubectl-1823 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:336
stdout/stderr in junit_01.xml


Kubernetes e2e suite [sig-cluster-lifecycle] [Disruptive]NodeLease NodeLease deletion node lease should be deleted when corresponding node is deleted (12m42s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cluster\-lifecycle\]\s\[Disruptive\]NodeLease\sNodeLease\sdeletion\snode\slease\sshould\sbe\sdeleted\swhen\scorresponding\snode\sis\sdeleted$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/node_lease.go:66
Jan  9 20:25:09.244: Expected
    <*errors.errorString | 0xc002eb3e10>: {
        s: "2 / 17 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD              NODE                                         PHASE   GRACE CONDITIONS\nkube-proxy-dc4pm gke-bootstrap-e2e-default-pool-3db84e9d-8gxq Pending       [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-09 20:16:14 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-09 20:16:14 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-09 20:16:14 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-09 20:16:14 +0000 UTC Reason: Message:}]\nkube-proxy-j6lmk gke-bootstrap-e2e-default-pool-3db84e9d-ldb5 Pending       [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-09 19:13:08 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-09 19:13:08 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-09 19:13:08 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-09 19:13:07 +0000 UTC Reason: Message:}]\n",
    }
to be nil
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/node_lease.go:98
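This entry failed the framework's kube-system health check: 2 of 17 pods, both kube-proxy, sat Pending with the kube-proxy container unready for the full 5m0s. A sketch of the usual next steps, assuming kubectl access; the pod names are taken verbatim from the failure message above:

# Where the stuck kube-proxy pods landed and their current state.
kubectl get pods -n kube-system -o wide | grep kube-proxy

# Scheduling events and container statuses for a pod named in the failure.
kubectl describe pod kube-proxy-dc4pm -n kube-system

# Container logs (add --previous if the container has restarted).
kubectl logs kube-proxy-dc4pm -n kube-system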