Result: FAILURE
Tests: 48 failed / 276 succeeded
Started: 2020-01-09 01:15
Elapsed: 15h7m
Revision:
Builder: gke-prow-default-pool-cf4891d4-0mh6
pod: 58ed3bf1-327d-11ea-9709-02f27a93e62e
resultstore: https://source.cloud.google.com/results/invocations/28e31d70-1f9f-4641-b440-95b80320e087/targets/test
infra-commit: fa175938f
job-version: v1.15.8-beta.1.18+9f3cd90378bd97
master_os_image:
node_os_image: cos-77-12371-89-0
revision: v1.15.8-beta.1.18+9f3cd90378bd97

Test Failures


Kubernetes e2e suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] 10m14s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sContainer\sLifecycle\sHook\swhen\screate\sa\spod\swith\slifecycle\shook\sshould\sexecute\spoststart\shttp\shook\sproperly\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  9 09:19:47.379: Couldn't delete ns: "container-lifecycle-hook-34": namespace container-lifecycle-hook-34 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace container-lifecycle-hook-34 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
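The namespace-deletion timeout above ("namespace is empty but is not yet removed") recurs across most failures in this run. A minimal diagnostic sketch, assuming kubectl access to the affected test cluster; the namespace name is copied from this failure, and the other failures would substitute their own:

# Show whether the namespace is stuck in Terminating and why (conditions, finalizers).
kubectl get namespace container-lifecycle-hook-34 -o yaml

# List any namespaced objects that could still be blocking deletion.
kubectl api-resources --verbs=list --namespaced -o name \
  | xargs -n1 kubectl get -n container-lifecycle-hook-34 --ignore-not-found --show-kind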
				


Kubernetes e2e suite [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] 10m3s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sPods\sshould\sget\sa\shost\sIP\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  9 11:55:03.766: Couldn't delete ns: "pods-3191": namespace pods-3191 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace pods-3191 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				


Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 10m16s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sResourceQuota\sshould\screate\sa\sResourceQuota\sand\scapture\sthe\slife\sof\sa\spod\.$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  9 09:41:47.391: Couldn't delete ns: "resourcequota-1580": namespace resourcequota-1580 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace resourcequota-1580 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				


Kubernetes e2e suite [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] 10m1s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sWatchers\sshould\sbe\sable\sto\sstart\swatching\sfrom\sa\sspecific\sresource\sversion\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  9 07:52:38.978: Couldn't delete ns: "watch-5115": namespace watch-5115 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace watch-5115 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				


Kubernetes e2e suite [sig-apps] DisruptionController should update PodDisruptionBudget status 10m9s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sDisruptionController\sshould\supdate\sPodDisruptionBudget\sstatus$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  9 14:56:38.840: Couldn't delete ns: "disruption-5514": namespace disruption-5514 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace disruption-5514 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				


Kubernetes e2e suite [sig-apps] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout 12m2s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sNetwork\sPartition\s\[Disruptive\]\s\[Slow\]\s\[k8s\.io\]\sPods\sshould\sreturn\sto\srunning\sand\sready\sstate\safter\snetwork\spartition\sis\shealed\sAll\spods\son\sthe\sunreachable\snode\sshould\sbe\smarked\sas\sNotReady\supon\sthe\snode\sturn\sNotReady\sAND\sall\spods\sshould\sbe\smark\sback\sto\sReady\swhen\sthe\snode\sget\sback\sto\sReady\sbefore\spod\seviction\stimeout$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/network_partition.go:137
Jan  9 11:34:59.075: Pods on node gke-bootstrap-e2e-default-pool-f57941e1-fspp are not ready and running within 2m0s: gave up waiting for matching pods to be 'Running and Ready' after 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/network_partition.go:161
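A hedged sketch for checking the node named in the failure above once the partition heals, assuming kubectl access to the test cluster; the node name is copied from the message, everything else is standard kubectl:

# Check the node's conditions (Ready, NetworkUnavailable) and recent events.
kubectl describe node gke-bootstrap-e2e-default-pool-f57941e1-fspp

# List the pods scheduled on that node and their readiness.
kubectl get pods --all-namespaces -o wide \
  --field-selector spec.nodeName=gke-bootstrap-e2e-default-pool-f57941e1-fspp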
				


Kubernetes e2e suite [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] 10m6s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sReplicationController\sshould\sadopt\smatching\spods\son\screation\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  9 10:38:21.305: Couldn't delete ns: "replication-controller-3982": namespace replication-controller-3982 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace replication-controller-3982 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				


Kubernetes e2e suite [sig-auth] ServiceAccounts should ensure a single API token exists 10m10s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-auth\]\sServiceAccounts\sshould\sensure\sa\ssingle\sAPI\stoken\sexists$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  9 08:54:27.540: Couldn't delete ns: "svcaccounts-9895": namespace svcaccounts-9895 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace svcaccounts-9895 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				


Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] ReplicationController light Should scale from 1 pod to 2 pods 15m47s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-autoscaling\]\s\[HPA\]\sHorizontal\spod\sautoscaling\s\(scale\sresource\:\sCPU\)\s\[sig\-autoscaling\]\sReplicationController\slight\sShould\sscale\sfrom\s1\spod\sto\s2\spods$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:70
timeout waiting 15m0s for 2 replicas
Unexpected error:
    <*errors.errorString | 0xc00028d8c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:124
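Several failures in this run (this ReplicationController case, the Deployment scale-down below, and the HPA check inside the master upgrade test) report the same 15m0s timeout waiting for a target replica count. A minimal sketch of how one might check whether the autoscaler is seeing CPU metrics, assuming kubectl access; <hpa-namespace> is a placeholder, since the test namespace does not appear in this snippet:

# Inspect the HPA's current/target metrics and scaling events (namespace is a placeholder).
kubectl describe hpa -n <hpa-namespace>

# Confirm resource metrics are being reported for the target pods.
kubectl top pods -n <hpa-namespace>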
				


Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 25m59s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-autoscaling\]\s\[HPA\]\sHorizontal\spod\sautoscaling\s\(scale\sresource\:\sCPU\)\s\[sig\-autoscaling\]\s\[Serial\]\s\[Slow\]\sDeployment\sShould\sscale\sfrom\s5\spods\sto\s3\spods\sand\sfrom\s3\sto\s1$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:43
timeout waiting 15m0s for 3 replicas
Unexpected error:
    <*errors.errorString | 0xc00028d8c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:124
				


Kubernetes e2e suite [sig-cluster-lifecycle] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover 17m41s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cluster\-lifecycle\]\sRestart\s\[Disruptive\]\sshould\srestart\sall\snodes\sand\sensure\sall\snodes\sand\spods\srecover$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/restart.go:86
Jan  9 07:11:46.745: At least one pod wasn't running and ready after the restart.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/restart.go:115
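A short sketch for confirming cluster state after the node restarts, assuming kubectl access; the failure does not name the affected pod, so nothing specific is filled in:

# Confirm every node returned to Ready after the restart.
kubectl get nodes

# List any pods that did not come back to Running.
kubectl get pods --all-namespaces --field-selector status.phase!=Running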
				


Kubernetes e2e suite [sig-cluster-lifecycle] Upgrade [Feature:Upgrade] master upgrade should maintain a functioning cluster [Feature:MasterUpgrade] 28m49s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cluster\-lifecycle\]\sUpgrade\s\[Feature\:Upgrade\]\smaster\supgrade\sshould\smaintain\sa\sfunctioning\scluster\s\[Feature\:MasterUpgrade\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/cluster_upgrade.go:91
Jan  9 01:47:23.158: timeout waiting 15m0s for 1 replicas
Unexpected error:
    <*errors.errorString | 0xc0000dd000>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/horizontal_pod_autoscalers.go:85