Result: FAILURE
Tests: 14 failed / 757 succeeded
Started: 2020-01-01 17:37
Elapsed: 15h6m
Builder: gke-prow-ssd-pool-1a225945-k98q
pod: 3c059e63-2cbd-11ea-a07b-c6eb1bf16817
resultstore: https://source.cloud.google.com/results/invocations/dc103cdb-5ebd-4979-9ec7-e91d4a8b9ea8/targets/test
infra-commit: 35a960fa0
job-version: v1.15.8-beta.1.12+7aa9cde6210ff8
revision: v1.15.8-beta.1.12+7aa9cde6210ff8
master_os_image:
node_os_image: cos-77-12371-89-0

Test Failures


Kubernetes e2e suite [sig-apps] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] 13m18s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sNetwork\sPartition\s\[Disruptive\]\s\[Slow\]\s\[k8s\.io\]\s\[StatefulSet\]\sshould\scome\sback\sup\sif\snode\sgoes\sdown\s\[Slow\]\s\[Disruptive\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 07:55:33.617: Couldn't delete ns: "network-partition-1151": namespace network-partition-1151 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace network-partition-1151 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
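The "namespace is empty but is not yet removed" message means the namespace's contents were cleaned up but the namespace object itself stayed in Terminating past the framework's deletion timeout, typically because its finalizer had not yet been cleared. A minimal diagnostic sketch, assuming access to the cluster while the namespace still exists:

kubectl get namespace network-partition-1151 -o yaml   # check status.phase and spec.finalizers
kubectl get events -n network-partition-1151 --sort-by=.lastTimestamp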
				


Kubernetes e2e suite [sig-auth] PodSecurityPolicy should allow pods under the privileged policy.PodSecurityPolicy 31s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-auth\]\sPodSecurityPolicy\sshould\sallow\spods\sunder\sthe\sprivileged\spolicy\.PodSecurityPolicy$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/pod_security_policy.go:101
PSP annotation not found
Expected
    <bool>: false
to be true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/pod_security_policy.go:118
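The missing annotation here is the kubernetes.io/psp annotation that the PodSecurityPolicy admission plugin sets on admitted pods to record which policy validated them. A minimal sketch for checking it by hand (pod and namespace names are placeholders, assuming a comparable pod exists in the cluster):

kubectl get pod <pod-name> -n <namespace> -o jsonpath='{.metadata.annotations.kubernetes\.io/psp}'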
				


Kubernetes e2e suite [sig-auth] PodSecurityPolicy should enforce the restricted policy.PodSecurityPolicy 15s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-auth\]\sPodSecurityPolicy\sshould\senforce\sthe\srestricted\spolicy\.PodSecurityPolicy$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/pod_security_policy.go:85
should be forbidden
Expected an error to have occurred.  Got:
    <nil>: nil
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2059
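"Expected an error to have occurred. Got: <nil>" means the test created a pod that the restricted policy should have rejected, and the API server accepted it instead. A rough reproduction sketch, with the namespace as a placeholder and the exact pod spec the test uses possibly differing: attempt to create a privileged pod and confirm the request is forbidden:

kubectl run psp-privileged-check --image=k8s.gcr.io/pause --restart=Never -n <namespace> --overrides='{"spec":{"containers":[{"name":"psp-privileged-check","image":"k8s.gcr.io/pause","securityContext":{"privileged":true}}]}}'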
				


Kubernetes e2e suite [sig-auth] PodSecurityPolicy should forbid pod creation when no PSP is available 29s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-auth\]\sPodSecurityPolicy\sshould\sforbid\spod\screation\swhen\sno\sPSP\sis\savailable$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/pod_security_policy.go:79
should be forbidden
Expected an error to have occurred.  Got:
    <nil>: nil
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2059
				


Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] ReplicationController light Should scale from 1 pod to 2 pods 15m58s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-autoscaling\]\s\[HPA\]\sHorizontal\spod\sautoscaling\s\(scale\sresource\:\sCPU\)\s\[sig\-autoscaling\]\sReplicationController\slight\sShould\sscale\sfrom\s1\spod\sto\s2\spods$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:70
timeout waiting 15m0s for 2 replicas
Unexpected error:
    <*errors.errorString | 0xc00027f8c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:124
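This failure and the two similar HPA timeouts below it have the same shape: the autoscaler never brought the workload to the target replica count within the 15-minute wait. A hedged first-pass check is whether the HPA was seeing CPU metrics at all (names are placeholders for the test's generated resources):

kubectl get hpa -n <test-namespace>
kubectl describe hpa <hpa-name> -n <test-namespace>   # look for FailedGetResourceMetric or missing current CPU
kubectl top pods -n <test-namespace>                   # requires the cluster's metrics pipeline to be serving data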
				


Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 15m58s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-autoscaling\]\s\[HPA\]\sHorizontal\spod\sautoscaling\s\(scale\sresource\:\sCPU\)\s\[sig\-autoscaling\]\s\[Serial\]\s\[Slow\]\sReplicaSet\sShould\sscale\sfrom\s1\spod\sto\s3\spods\sand\sfrom\s3\sto\s5$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:50
timeout waiting 15m0s for 3 replicas
Unexpected error:
    <*errors.errorString | 0xc00027f8c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:124
				


Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability 15m47s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-autoscaling\]\s\[HPA\]\sHorizontal\spod\sautoscaling\s\(scale\sresource\:\sCPU\)\s\[sig\-autoscaling\]\s\[Serial\]\s\[Slow\]\sReplicationController\sShould\sscale\sfrom\s1\spod\sto\s3\spods\sand\sfrom\s3\sto\s5\sand\sverify\sdecision\sstability$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:61
timeout waiting 15m0s for 3 replicas
Unexpected error:
    <*errors.errorString | 0xc00027f8c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:124
				


Kubernetes e2e suite [sig-cluster-lifecycle] Upgrade [Feature:Upgrade] master upgrade should maintain a functioning cluster [Feature:MasterUpgrade] 28m35s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cluster\-lifecycle\]\sUpgrade\s\[Feature\:Upgrade\]\smaster\supgrade\sshould\smaintain\sa\sfunctioning\scluster\s\[Feature\:MasterUpgrade\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/cluster_upgrade.go:91
Jan  1 18:10:49.904: timeout waiting 15m0s for 1 replicas
Unexpected error:
    <*errors.errorString | 0xc0000d7000>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/horizontal_pod_autoscalers.go:85