Result: FAILURE
Tests: 22 failed / 721 succeeded
Started: 2019-12-22 18:24
Elapsed: 15h6m
Revision: v1.15.8-beta.1.4+c53e0770eab343
Builder: gke-prow-ssd-pool-1a225945-fp0z
pod: 3b8cf7bb-24e8-11ea-9a71-1235d8f27bba
resultstore: https://source.cloud.google.com/results/invocations/0bbc0521-fed5-4208-82b1-769d2473b6f6/targets/test
infra-commit: 7d2dc9beb
job-version: v1.15.8-beta.1.4+c53e0770eab343
master_os_image:
node_os_image: cos-77-12371-89-0

Test Failures


Kubernetes e2e suite [k8s.io] EquivalenceCache [Serial] validates pod affinity works properly when new replica pod is scheduled 6m8s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sEquivalenceCache\s\[Serial\]\svalidates\spod\saffinity\sworks\sproperly\swhen\snew\sreplica\spod\sis\sscheduled$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/equivalence_cache_predicates.go:52
Unexpected error:
    <*errors.errorString | 0xc005905210>: {
        s: "1 / 22 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                       NODE                                         PHASE   GRACE CONDITIONS\nstackdriver-metadata-agent-cluster-level-76bffb776b-4n8n8 gke-bootstrap-e2e-default-pool-4bfefbc6-40xz Running       [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-22 22:02:38 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-22 22:03:33 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [metadata-agent]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-22 22:03:33 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [metadata-agent]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-22 22:02:38 +0000 UTC Reason: Message:}]\n",
    }
    1 / 22 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                       NODE                                         PHASE   GRACE CONDITIONS
    stackdriver-metadata-agent-cluster-level-76bffb776b-4n8n8 gke-bootstrap-e2e-default-pool-4bfefbc6-40xz Running       [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-22 22:02:38 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-22 22:03:33 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [metadata-agent]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-22 22:03:33 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [metadata-agent]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-22 22:02:38 +0000 UTC Reason: Message:}]
    
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/equivalence_cache_predicates.go:74
				
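The failure here is environmental rather than scheduler-related: the stackdriver-metadata-agent-cluster-level pod never becomes Ready, so the suite's precondition that all kube-system pods be running and ready times out after 5m0s. A minimal triage sketch, assuming kubectl access to the affected cluster (the pod and container names below are taken from this run and will differ elsewhere):

    # Show why the metadata-agent container is stuck in ContainersNotReady
    kubectl describe pod -n kube-system stackdriver-metadata-agent-cluster-level-76bffb776b-4n8n8

    # Pull the container's logs, including any previously crashed instance
    kubectl logs -n kube-system stackdriver-metadata-agent-cluster-level-76bffb776b-4n8n8 \
        -c metadata-agent --previous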


Kubernetes e2e suite [k8s.io] EquivalenceCache [Serial] validates pod anti-affinity works properly when new replica pod is scheduled 6m7s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sEquivalenceCache\s\[Serial\]\svalidates\spod\santi\-affinity\sworks\sproperly\swhen\snew\sreplica\spod\sis\sscheduled$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/equivalence_cache_predicates.go:52
Unexpected error:
    <*errors.errorString | 0xc0088a9bc0>: {
        s: "1 / 22 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                       NODE                                         PHASE   GRACE CONDITIONS\nstackdriver-metadata-agent-cluster-level-76bffb776b-vsmtx gke-bootstrap-e2e-default-pool-4bfefbc6-qwcv Running       [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-23 03:51:03 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-23 06:51:22 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [metadata-agent]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-23 06:51:22 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [metadata-agent]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-23 03:51:03 +0000 UTC Reason: Message:}]\n",
    }
    1 / 22 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                       NODE                                         PHASE   GRACE CONDITIONS
    stackdriver-metadata-agent-cluster-level-76bffb776b-vsmtx gke-bootstrap-e2e-default-pool-4bfefbc6-qwcv Running       [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-23 03:51:03 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-23 06:51:22 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [metadata-agent]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-23 06:51:22 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [metadata-agent]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-23 03:51:03 +0000 UTC Reason: Message:}]
    
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/equivalence_cache_predicates.go:74
				
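The anti-affinity spec fails with the identical precondition timeout (only the replica and node differ), so both EquivalenceCache specs can be reproduced in one pass by widening the focus pattern instead of escaping each full test name; a sketch, assuming the same checkout and flags as the repro commands above:

    # Run all EquivalenceCache specs in one invocation
    go run hack/e2e.go -v --test --test_args='--ginkgo.focus=EquivalenceCache'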


Kubernetes e2e suite [sig-auth] PodSecurityPolicy should allow pods under the privileged policy.PodSecurityPolicy 24s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-auth\]\sPodSecurityPolicy\sshould\sallow\spods\sunder\sthe\sprivileged\spolicy\.PodSecurityPolicy$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/pod_security_policy.go:101
PSP annotation not found
Expected
    <bool>: false
to be true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/pod_security_policy.go:118
				
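This assertion checks that the PodSecurityPolicy admission plugin stamped the pod with the kubernetes.io/psp annotation naming the policy that admitted it; the annotation was missing. The same check can be made by hand (a sketch; the pod name is a placeholder):

    # Print which PSP admitted the pod; empty output reproduces the failure
    kubectl get pod <pod-name> -o jsonpath='{.metadata.annotations.kubernetes\.io/psp}'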


Kubernetes e2e suite [sig-auth] PodSecurityPolicy should enforce the restricted policy.PodSecurityPolicy 15s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-auth\]\sPodSecurityPolicy\sshould\senforce\sthe\srestricted\spolicy\.PodSecurityPolicy$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/pod_security_policy.go:85
should be forbidden
Expected an error to have occurred.  Got:
    <nil>: nil
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2059
				
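The restricted policy should make the API server reject a privileged pod at admission, so the test expects a forbidden error and instead got nil. A hedged manual reproduction, assuming an identity bound only to the restricted policy (the impersonated service account and namespace are placeholders):

    # Creating a privileged pod as the restricted identity should fail
    # with a "forbidden" admission error if the policy is enforced
    cat <<'EOF' | kubectl --as=system:serviceaccount:<ns>:<sa> create -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: privileged-probe
    spec:
      containers:
      - name: probe
        image: k8s.gcr.io/pause:3.1
        securityContext:
          privileged: true
    EOF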


Kubernetes e2e suite [sig-auth] PodSecurityPolicy should forbid pod creation when no PSP is available 29s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-auth\]\sPodSecurityPolicy\sshould\sforbid\spod\screation\swhen\sno\sPSP\sis\savailable$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/pod_security_policy.go:79
should be forbidden
Expected an error to have occurred.  Got:
    <nil>: nil
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2059
				
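Here pod creation succeeded even though no policy should have been usable by the test's service account. Whether any PodSecurityPolicy is grantable to a given identity can be checked directly (names are placeholders):

    # List the cluster's policies, then ask whether the test identity
    # may use one of them; "no" for every policy should make pod
    # creation forbidden
    kubectl get podsecuritypolicies
    kubectl auth can-i use podsecuritypolicy/<policy-name> \
        --as=system:serviceaccount:<ns>:<sa>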


Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] ReplicationController light Should scale from 1 pod to 2 pods 16m5s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-autoscaling\]\s\[HPA\]\sHorizontal\spod\sautoscaling\s\(scale\sresource\:\sCPU\)\s\[sig\-autoscaling\]\sReplicationController\slight\sShould\sscale\sfrom\s1\spod\sto\s2\spods$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:70
timeout waiting 15m0s for 2 replicas
Unexpected error:
    <*errors.errorString | 0xc00029f8b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:124
				
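This and the four HPA specs below all time out the same way: the scale target never reaches the expected replica count within 15m0s, which typically points at the resource-metrics pipeline rather than the autoscaler logic itself. A first-pass check, assuming the test namespace still exists (names are placeholders):

    # Current vs. target CPU, plus events such as FailedGetResourceMetric
    kubectl describe hpa -n <test-namespace> <hpa-name>

    # Confirm resource metrics are being served for the target pods
    kubectl top pods -n <test-namespace>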


Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 15m50s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-autoscaling\]\s\[HPA\]\sHorizontal\spod\sautoscaling\s\(scale\sresource\:\sCPU\)\s\[sig\-autoscaling\]\s\[Serial\]\s\[Slow\]\sDeployment\sShould\sscale\sfrom\s1\spod\sto\s3\spods\sand\sfrom\s3\sto\s5$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:40
timeout waiting 15m0s for 3 replicas
Unexpected error:
    <*errors.errorString | 0xc00029f8b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:124
				


Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 15m58s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-autoscaling\]\s\[HPA\]\sHorizontal\spod\sautoscaling\s\(scale\sresource\:\sCPU\)\s\[sig\-autoscaling\]\s\[Serial\]\s\[Slow\]\sReplicaSet\sShould\sscale\sfrom\s1\spod\sto\s3\spods\sand\sfrom\s3\sto\s5$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:50
timeout waiting 15m0s for 3 replicas
Unexpected error:
    <*errors.errorString | 0xc00029f8b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:124
				


Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 15m58s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-autoscaling\]\s\[HPA\]\sHorizontal\spod\sautoscaling\s\(scale\sresource\:\sCPU\)\s\[sig\-autoscaling\]\s\[Serial\]\s\[Slow\]\sReplicaSet\sShould\sscale\sfrom\s5\spods\sto\s3\spods\sand\sfrom\s3\sto\s1$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:53
timeout waiting 15m0s for 3 replicas
Unexpected error:
    <*errors.errorString | 0xc00029f8b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:124
				


Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability 15m53s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-autoscaling\]\s\[HPA\]\sHorizontal\spod\sautoscaling\s\(scale\sresource\:\sCPU\)\s\[sig\-autoscaling\]\s\[Serial\]\s\[Slow\]\sReplicationController\sShould\sscale\sfrom\s5\spods\sto\s3\spods\sand\sfrom\s3\sto\s1\sand\sverify\sdecision\sstability$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:64
timeout waiting 15m0s for 3 replicas
Unexpected error:
    <*errors.errorString | 0xc00029f8b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:124
				
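If kubectl top also returns an error, the resource metrics API can be queried directly to confirm whether the metrics backend (metrics-server or, on clusters of this vintage, Heapster) is serving samples at all; a sketch with a placeholder namespace:

    # An error here means the HPA controller has no CPU samples to act on
    kubectl get --raw /apis/metrics.k8s.io/v1beta1/namespaces/<test-namespace>/pods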


Kubernetes e2e suite [sig-cluster-lifecycle] Nodes [Disruptive] Resize [Slow] should be able to add nodes 12m17s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cluster\-lifecycle\]\sNodes\s\[Disruptive\]\sResize\s\[Slow\]\sshould\sbe\sable\sto\sadd\snodes$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/resize_nodes.go:74
Unexpected error:
    <*errors.errorString | 0xc002bc7d70>: {
        s: "1 / 22 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                       NODE                                         PHASE   GRACE CONDITIONS\nstackdriver-metadata-agent-cluster-level-76bffb776b-4n8n8 gke-bootstrap-e2e-default-pool-4bfefbc6-40xz Running       [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-22 22:02:38 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-22 22:03:33 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [metadata-agent]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-22 22:03:33 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [metadata-agent]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-22 22:02:38 +0000 UTC Reason: Message:}]\n",
    }
    1 / 22 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                       NODE                                         PHASE   GRACE CONDITIONS
    stackdriver-metadata-agent-cluster-level-76bffb776b-4n8n8 gke-bootstrap-e2e-default-pool-4bfefbc6-40xz Running       [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-22 22:02:38 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-22 22:03:33 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [metadata-agent]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-22 22:03:33 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [metadata-agent]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-22 22:02:38 +0000 UTC Reason: Message:}]
    
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/resize_nodes.go:106