Result: FAILURE
Tests: 19 failed / 772 succeeded
Started: 2019-12-25 03:51
Elapsed: 15h6m
Builder: gke-prow-ssd-pool-1a225945-lf79
pod: a75fa89e-26c9-11ea-a07b-c6eb1bf16817
resultstore: https://source.cloud.google.com/results/invocations/059fca4c-3fc8-40ae-8f4e-28c3efba7170/targets/test
infra-commit: cbaf5970c
job-version: v1.15.8-beta.1.4+c53e0770eab343
revision: v1.15.8-beta.1.4+c53e0770eab343
master_os_image:
node_os_image: cos-77-12371-89-0

Test Failures


Kubernetes e2e suite AfterSuite 0.00s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\sAfterSuite$'
_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:164
Expected success, but got an error:
    <*errors.StatusError | 0xc0042f28c0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "Operation cannot be fulfilled on namespaces \"nslifetest-1-7433\": The system is ensuring all content is removed from this namespace.  Upon completion, this namespace will automatically be purged by the system.",
            Reason: "Conflict",
            Details: {
                Name: "nslifetest-1-7433",
                Group: "",
                Kind: "namespaces",
                UID: "",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 409,
        },
    }
    Operation cannot be fulfilled on namespaces "nslifetest-1-7433": The system is ensuring all content is removed from this namespace.  Upon completion, this namespace will automatically be purged by the system.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:791
from junit_01.xml

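The 409 Conflict above is the apiserver rejecting an operation on a namespace that is already terminating: the AfterSuite cleanup raced with a namespace deletion that was already in progress. A minimal client-go sketch of how such a conflict can be tolerated (our illustration, assuming 1.15-era pre-context client-go signatures; this is not the e2e framework's own cleanup code):

    package e2ecleanup

    import (
        "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // deleteNamespaceTolerant deletes a namespace but treats "already
    // terminating" (409 Conflict, as in the failure above) and "already
    // gone" (404 Not Found) as success, since both mean the namespace
    // is on its way out anyway.
    func deleteNamespaceTolerant(c kubernetes.Interface, name string) error {
        err := c.CoreV1().Namespaces().Delete(name, &metav1.DeleteOptions{})
        if err == nil || errors.IsConflict(err) || errors.IsNotFound(err) {
            return nil
        }
        return err
    }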


Kubernetes e2e suite [k8s.io] EquivalenceCache [Serial] validates pod affinity works properly when new replica pod is scheduled 6m7s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sEquivalenceCache\s\[Serial\]\svalidates\spod\saffinity\sworks\sproperly\swhen\snew\sreplica\spod\sis\sscheduled$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/equivalence_cache_predicates.go:52
Unexpected error:
    <*errors.errorString | 0xc0067f1e60>: {
        s: "1 / 22 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                       NODE                                         PHASE   GRACE CONDITIONS\nstackdriver-metadata-agent-cluster-level-76bffb776b-26b8q gke-bootstrap-e2e-default-pool-3b523047-tjf4 Running       [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-25 11:30:26 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-25 14:30:36 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [metadata-agent]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-25 14:30:36 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [metadata-agent]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-25 11:30:26 +0000 UTC Reason: Message:}]\n",
    }
    1 / 22 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                       NODE                                         PHASE   GRACE CONDITIONS
    stackdriver-metadata-agent-cluster-level-76bffb776b-26b8q gke-bootstrap-e2e-default-pool-3b523047-tjf4 Running       [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-25 11:30:26 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-25 14:30:36 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [metadata-agent]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-25 14:30:36 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [metadata-agent]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-25 11:30:26 +0000 UTC Reason: Message:}]
    
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/equivalence_cache_predicates.go:74
from junit_01.xml

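This failure never reaches the affinity logic: it is the framework's five-minute gate requiring every kube-system pod to be Running and Ready. The stackdriver metadata-agent pod is Running but its Ready condition is False (ContainersNotReady), and the same gate fails again in the node-resize case further down. A sketch of the readiness predicate involved (our illustration, not the framework helper itself):

    package e2echeck

    import corev1 "k8s.io/api/core/v1"

    // podRunningAndReady mirrors the "RUNNING and READY" check: the pod
    // phase must be Running and its Ready condition must be True.
    func podRunningAndReady(pod *corev1.Pod) bool {
        if pod.Status.Phase != corev1.PodRunning {
            return false
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue
            }
        }
        // No Ready condition reported yet: not ready.
        return false
    }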


Kubernetes e2e suite [sig-auth] PodSecurityPolicy should allow pods under the privileged policy.PodSecurityPolicy 31s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-auth\]\sPodSecurityPolicy\sshould\sallow\spods\sunder\sthe\sprivileged\spolicy\.PodSecurityPolicy$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/pod_security_policy.go:101
PSP annotation not found
Expected
    <bool>: false
to be true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/pod_security_policy.go:118
				
from junit_01.xml

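"PSP annotation not found" means the PSP admission controller never stamped the created pod with the policy that validated it, which it records under the well-known "kubernetes.io/psp" annotation key. A sketch of the lookup that came back empty (our illustration):

    package e2echeck

    import corev1 "k8s.io/api/core/v1"

    // validatingPSP returns the name of the PodSecurityPolicy that admitted
    // the pod, as recorded by the PSP admission controller, and whether the
    // annotation was present at all (in this failure it was not).
    func validatingPSP(pod *corev1.Pod) (string, bool) {
        name, ok := pod.Annotations["kubernetes.io/psp"]
        return name, ok
    }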


Kubernetes e2e suite [sig-auth] PodSecurityPolicy should enforce the restricted policy.PodSecurityPolicy 27s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-auth\]\sPodSecurityPolicy\sshould\senforce\sthe\srestricted\spolicy\.PodSecurityPolicy$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/pod_security_policy.go:85
should be forbidden
Expected an error to have occurred.  Got:
    <nil>: nil
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2059
				
from junit_01.xml

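This failure and the next one invert the usual expectation: creating the pod should have been rejected with 403 Forbidden, but Create returned nil, i.e. PSP enforcement did not kick in at all. A sketch of the expected-forbidden check (our illustration, assuming 1.15-era client-go signatures):

    package e2echeck

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/errors"
        "k8s.io/client-go/kubernetes"
    )

    // expectPodForbidden succeeds only when pod creation is rejected with
    // 403 Forbidden; a nil error (as in these two failures) is itself a
    // failure, as is any other kind of error.
    func expectPodForbidden(c kubernetes.Interface, ns string, pod *corev1.Pod) error {
        _, err := c.CoreV1().Pods(ns).Create(pod)
        if errors.IsForbidden(err) {
            return nil
        }
        return fmt.Errorf("expected Forbidden creating pod, got: %v", err)
    }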


Kubernetes e2e suite [sig-auth] PodSecurityPolicy should forbid pod creation when no PSP is available 29s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-auth\]\sPodSecurityPolicy\sshould\sforbid\spod\screation\swhen\sno\sPSP\sis\savailable$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/pod_security_policy.go:79
should be forbidden
Expected an error to have occurred.  Got:
    <nil>: nil
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2059
				
from junit_01.xml



Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] ReplicationController light Should scale from 2 pods to 1 pod 15m54s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-autoscaling\]\s\[HPA\]\sHorizontal\spod\sautoscaling\s\(scale\sresource\:\sCPU\)\s\[sig\-autoscaling\]\sReplicationController\slight\sShould\sscale\sfrom\s2\spods\sto\s1\spod$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:82
timeout waiting 15m0s for 1 replicas
Unexpected error:
    <*errors.errorString | 0xc0002b18b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:124
				
from junit_01.xml

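This and the four HPA failures that follow (plus the HPA check inside the master-upgrade failure at the end) all fail identically: "timed out waiting for the condition" is the generic timeout error from k8s.io/apimachinery's wait package, surfaced after 15 minutes of polling for the expected replica count. A sketch of that polling shape (our illustration; getReplicas is a hypothetical helper, not a framework function):

    package e2echeck

    import (
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    // waitForReplicas polls until the scale target reports the wanted
    // number of replicas; on timeout, wait returns exactly the
    // "timed out waiting for the condition" error seen above.
    func waitForReplicas(want int, getReplicas func() (int, error)) error {
        return wait.PollImmediate(10*time.Second, 15*time.Minute, func() (bool, error) {
            n, err := getReplicas()
            if err != nil {
                return false, err // a real error aborts the poll early
            }
            return n == want, nil // false keeps polling until the 15m timeout
        })
    }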


Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 16m0s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-autoscaling\]\s\[HPA\]\sHorizontal\spod\sautoscaling\s\(scale\sresource\:\sCPU\)\s\[sig\-autoscaling\]\s\[Serial\]\s\[Slow\]\sDeployment\sShould\sscale\sfrom\s1\spod\sto\s3\spods\sand\sfrom\s3\sto\s5$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:40
timeout waiting 15m0s for 3 replicas
Unexpected error:
    <*errors.errorString | 0xc0002b18b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:124
				
from junit_01.xml



Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 16m2s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-autoscaling\]\s\[HPA\]\sHorizontal\spod\sautoscaling\s\(scale\sresource\:\sCPU\)\s\[sig\-autoscaling\]\s\[Serial\]\s\[Slow\]\sDeployment\sShould\sscale\sfrom\s5\spods\sto\s3\spods\sand\sfrom\s3\sto\s1$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:43
timeout waiting 15m0s for 3 replicas
Unexpected error:
    <*errors.errorString | 0xc0002b18b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:124
				
from junit_01.xml



Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 15m55s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-autoscaling\]\s\[HPA\]\sHorizontal\spod\sautoscaling\s\(scale\sresource\:\sCPU\)\s\[sig\-autoscaling\]\s\[Serial\]\s\[Slow\]\sReplicaSet\sShould\sscale\sfrom\s1\spod\sto\s3\spods\sand\sfrom\s3\sto\s5$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:50
timeout waiting 15m0s for 3 replicas
Unexpected error:
    <*errors.errorString | 0xc0002b18b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:124
				
from junit_01.xml



Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 15m59s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-autoscaling\]\s\[HPA\]\sHorizontal\spod\sautoscaling\s\(scale\sresource\:\sCPU\)\s\[sig\-autoscaling\]\s\[Serial\]\s\[Slow\]\sReplicaSet\sShould\sscale\sfrom\s5\spods\sto\s3\spods\sand\sfrom\s3\sto\s1$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:53
timeout waiting 15m0s for 3 replicas
Unexpected error:
    <*errors.errorString | 0xc0002b18b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:124
				
from junit_01.xml

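For reference, the object these HPA cases create and then wait on is a CPU-based autoscaler attached to a Deployment, ReplicaSet, or ReplicationController. A minimal autoscaling/v1 sketch of that shape (our illustration, not the framework's constructor):

    package e2echeck

    import (
        autoscalingv1 "k8s.io/api/autoscaling/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // newCPUAutoscaler builds an HPA that scales the named target between
    // min and max replicas around a target CPU utilization percentage.
    func newCPUAutoscaler(name, kind, target string, min, max, cpu int32) *autoscalingv1.HorizontalPodAutoscaler {
        return &autoscalingv1.HorizontalPodAutoscaler{
            ObjectMeta: metav1.ObjectMeta{Name: name},
            Spec: autoscalingv1.HorizontalPodAutoscalerSpec{
                ScaleTargetRef: autoscalingv1.CrossVersionObjectReference{
                    Kind: kind, // "Deployment", "ReplicaSet", or "ReplicationController"
                    Name: target,
                },
                MinReplicas:                    &min,
                MaxReplicas:                    max,
                TargetCPUUtilizationPercentage: &cpu,
            },
        }
    }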


Kubernetes e2e suite [sig-cluster-lifecycle] Nodes [Disruptive] Resize [Slow] should be able to delete nodes 13m52s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cluster\-lifecycle\]\sNodes\s\[Disruptive\]\sResize\s\[Slow\]\sshould\sbe\sable\sto\sdelete\snodes$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/resize_nodes.go:74
Unexpected error:
    <*errors.errorString | 0xc002049b30>: {
        s: "1 / 22 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                       NODE                                         PHASE   GRACE CONDITIONS\nstackdriver-metadata-agent-cluster-level-76bffb776b-26b8q gke-bootstrap-e2e-default-pool-3b523047-tjf4 Running       [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-25 11:30:26 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-25 11:36:17 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [metadata-agent]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-25 11:36:17 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [metadata-agent]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-25 11:30:26 +0000 UTC Reason: Message:}]\n",
    }
    1 / 22 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                       NODE                                         PHASE   GRACE CONDITIONS
    stackdriver-metadata-agent-cluster-level-76bffb776b-26b8q gke-bootstrap-e2e-default-pool-3b523047-tjf4 Running       [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-25 11:30:26 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-25 11:36:17 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [metadata-agent]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-25 11:36:17 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [metadata-agent]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-25 11:30:26 +0000 UTC Reason: Message:}]
    
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/resize_nodes.go:106
				
from junit_01.xml



Kubernetes e2e suite [sig-cluster-lifecycle] Upgrade [Feature:Upgrade] master upgrade should maintain a functioning cluster [Feature:MasterUpgrade] 29m49s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cluster\-lifecycle\]\sUpgrade\s\[Feature\:Upgrade\]\smaster\supgrade\sshould\smaintain\sa\sfunctioning\scluster\s\[Feature\:MasterUpgrade\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/cluster_upgrade.go:91
Dec 25 04:24:21.490: timeout waiting 15m0s for 1 replicas
Unexpected error:
    <*errors.errorString | 0xc00009f010>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/horizontal_pod_autoscalers.go:85