Result:          FAILURE
Tests:           21 failed / 723 succeeded
Started:         2019-12-28 07:41
Elapsed:         15h6m
Builder:         gke-prow-ssd-pool-1a225945-v2fg
pod:             6f4a6d8c-2945-11ea-a07b-c6eb1bf16817
resultstore:     https://source.cloud.google.com/results/invocations/24485509-87b9-44ae-9745-02a1b1f4818e/targets/test
infra-commit:    de6f398b1
job-version:     v1.15.8-beta.1.12+7aa9cde6210ff8
revision:        v1.15.8-beta.1.12+7aa9cde6210ff8
master_os_image:
node_os_image:   cos-77-12371-89-0

Test Failures


Kubernetes e2e suite [k8s.io] EquivalenceCache [Serial] validates pod anti-affinity works properly when new replica pod is scheduled 6m9s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sEquivalenceCache\s\[Serial\]\svalidates\spod\santi\-affinity\sworks\sproperly\swhen\snew\sreplica\spod\sis\sscheduled$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/equivalence_cache_predicates.go:52
Unexpected error:
    <*errors.errorString | 0xc0059dc2b0>: {
        s: (the message below, including the full pod condition table),
    }
    5 / 24 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                    NODE                                         PHASE   GRACE CONDITIONS
    event-exporter-v0.3.0-6b549c49dd-4px4h gke-bootstrap-e2e-default-pool-608294e3-b0h6 Running       [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-28 07:48:58 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-28 20:49:11 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [event-exporter prometheus-to-sd-exporter]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-28 20:49:39 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [event-exporter prometheus-to-sd-exporter]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-28 07:48:58 +0000 UTC Reason: Message:}]
    fluentd-gcp-scaler-dd489f778-f4vkc     gke-bootstrap-e2e-default-pool-608294e3-b0h6 Running       [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-28 15:29:59 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-28 20:49:11 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [fluentd-gcp-scaler]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-28 20:49:39 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [fluentd-gcp-scaler]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-28 15:29:59 +0000 UTC Reason: Message:}]
    heapster-76c4845f55-fwslk              gke-bootstrap-e2e-default-pool-608294e3-b0h6 Running       [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-28 15:29:59 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-28 20:49:12 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [heapster prom-to-sd heapster-nanny]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-28 20:49:39 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [heapster prom-to-sd heapster-nanny]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-28 15:29:59 +0000 UTC Reason: Message:}]
    kube-dns-5c44c7b6b6-ftfrz              gke-bootstrap-e2e-default-pool-608294e3-b0h6 Running 30s   [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-28 07:59:34 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-28 20:49:12 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [kubedns dnsmasq sidecar prometheus-to-sd]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-28 20:49:39 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [kubedns dnsmasq sidecar prometheus-to-sd]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-28 07:59:34 +0000 UTC Reason: Message:}]
    kube-dns-autoscaler-66f9477b68-mmrmf   gke-bootstrap-e2e-default-pool-608294e3-b0h6 Running       [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-28 07:59:18 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-28 20:49:12 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [autoscaler]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-28 20:49:39 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [autoscaler]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-28 07:59:18 +0000 UTC Reason: Message:}]
    
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/equivalence_cache_predicates.go:74
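
This failure occurs in the suite's readiness gate, before the anti-affinity assertion itself runs: five kube-system pods never became Running and Ready within the 5m0s wait. All five sit on the same node (gke-bootstrap-e2e-default-pool-608294e3-b0h6), which points at node-level trouble rather than five independent pod failures, though this excerpt alone cannot confirm that. A minimal triage sketch against a live cluster, using standard kubectl and the names from this run:

    # Surface pods that are scheduled but not fully ready
    kubectl get pods -n kube-system -o wide
    # Check the shared node for pressure conditions, kubelet problems, or recent restarts
    kubectl describe node gke-bootstrap-e2e-default-pool-608294e3-b0h6
    # Drill into one unready pod's container states and events
    kubectl describe pod event-exporter-v0.3.0-6b549c49dd-4px4h -n kube-system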
				


Kubernetes e2e suite [sig-api-machinery] Servers with support for API chunking should support continue listing from the last key if the original version has been compacted away, though the list is inconsistent 14m18s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sServers\swith\ssupport\sfor\sAPI\schunking\sshould\ssupport\scontinue\slisting\sfrom\sthe\slast\skey\sif\sthe\soriginal\sversion\shas\sbeen\scompacted\saway\,\sthough\sthe\slist\sis\sinconsistent$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 28 21:30:58.558: Couldn't delete ns: "chunking-76": namespace chunking-76 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace chunking-76 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
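
The error text narrows this down: "namespace is empty but is not yet removed" means the test's objects were already deleted and the namespace controller simply had not finalized the namespace within the framework's timeout. A hedged triage sketch (chunking-76 is this run's generated namespace; it differs per run):

    # A Terminating phase with spec.finalizers still set implicates the namespace controller
    kubectl get namespace chunking-76 -o yaml
    # Confirm nothing namespaced is actually left behind
    kubectl api-resources --verbs=list --namespaced -o name \
      | xargs -n 1 kubectl get -n chunking-76 --ignore-not-found --show-kind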
				


Kubernetes e2e suite [sig-auth] PodSecurityPolicy should allow pods under the privileged policy.PodSecurityPolicy 15s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-auth\]\sPodSecurityPolicy\sshould\sallow\spods\sunder\sthe\sprivileged\spolicy\.PodSecurityPolicy$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/pod_security_policy.go:101
PSP annotation not found
Expected
    <bool>: false
to be true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/pod_security_policy.go:118
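
The assertion behind "PSP annotation not found" checks that admission stamped the created pod with the kubernetes.io/psp annotation naming the policy that admitted it. A manual spot-check could look like the following; the pod and namespace names are placeholders, since the test's generated names are not in this excerpt:

    # Prints the admitting policy's name; empty output means the PodSecurityPolicy
    # admission plugin is not enabled or did not validate this pod
    kubectl get pod <allowed-pod> -n <test-namespace> \
      -o jsonpath='{.metadata.annotations.kubernetes\.io/psp}'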
				


Kubernetes e2e suite [sig-auth] PodSecurityPolicy should enforce the restricted policy.PodSecurityPolicy 18s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-auth\]\sPodSecurityPolicy\sshould\senforce\sthe\srestricted\spolicy\.PodSecurityPolicy$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/pod_security_policy.go:85
should be forbidden
Expected an error to have occurred.  Got:
    <nil>: nil
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2059
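
"should be forbidden" with a nil error means the apiserver admitted a pod that the restricted policy should have rejected, which usually indicates the test account could "use" a more permissive policy than intended. One way to probe that, sketched with a placeholder namespace:

    # List all policies, then ask whether the test's service account may use a permissive one
    kubectl get podsecuritypolicies
    kubectl auth can-i use podsecuritypolicy/privileged \
      --as=system:serviceaccount:<test-namespace>:default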
				


Kubernetes e2e suite [sig-auth] PodSecurityPolicy should forbid pod creation when no PSP is available 13s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-auth\]\sPodSecurityPolicy\sshould\sforbid\spod\screation\swhen\sno\sPSP\sis\savailable$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/pod_security_policy.go:79
should be forbidden
Expected an error to have occurred.  Got:
    <nil>: nil
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2059
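
This is the same nil-error symptom as the previous failure, but here no policy should be usable at all. A plausible though unconfirmed cause is a cluster-default policy whose binding covers the test account, such as the gce.* policies GKE ships. A rough sketch for checking:

    # Any policy listed here is a candidate; then look for bindings granting 'use' on it
    kubectl get podsecuritypolicies
    kubectl get clusterrolebindings -o wide | grep -i psp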
				


Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 16m0s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-autoscaling\]\s\[HPA\]\sHorizontal\spod\sautoscaling\s\(scale\sresource\:\sCPU\)\s\[sig\-autoscaling\]\s\[Serial\]\s\[Slow\]\sDeployment\sShould\sscale\sfrom\s1\spod\sto\s3\spods\sand\sfrom\s3\sto\s5$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:40
timeout waiting 15m0s for 3 replicas
Unexpected error:
    <*errors.errorString | 0xc0002af8a0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:124
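
All three HPA failures in this run (this one, plus the scale-down and ReplicaSet variants below) show the same symptom: the workload never reached 3 replicas within 15m0s. For a CPU-based HPA on a 1.15 GKE cluster this typically means no CPU metrics were available, and heapster appears among the unready kube-system pods in the first failure above, a plausible though unconfirmed common cause. A triage sketch with placeholder names for the test's generated resources:

    # <unknown> in the TARGETS column implicates the metrics pipeline, not the scaler
    kubectl get hpa -n <test-namespace>
    # Conditions and events (e.g. FailedGetResourceMetric) say why scaling stalled
    kubectl describe hpa <hpa-name> -n <test-namespace>
    kubectl top pods -n <test-namespace>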
				


Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 16m6s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-autoscaling\]\s\[HPA\]\sHorizontal\spod\sautoscaling\s\(scale\sresource\:\sCPU\)\s\[sig\-autoscaling\]\s\[Serial\]\s\[Slow\]\sDeployment\sShould\sscale\sfrom\s5\spods\sto\s3\spods\sand\sfrom\s3\sto\s1$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:43
timeout waiting 15m0s for 3 replicas
Unexpected error:
    <*errors.errorString | 0xc0002af8a0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:124
				


Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 15m52s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-autoscaling\]\s\[HPA\]\sHorizontal\spod\sautoscaling\s\(scale\sresource\:\sCPU\)\s\[sig\-autoscaling\]\s\[Serial\]\s\[Slow\]\sReplicaSet\sShould\sscale\sfrom\s1\spod\sto\s3\spods\sand\sfrom\s3\sto\s5$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:50
timeout waiting 15m0s for 3 replicas
Unexpected error:
    <*errors.errorString | 0xc0002af8a0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:124