Result: FAILURE
Tests: 3 failed / 32 succeeded
Started: 2020-01-19 04:52
Elapsed: 4h30m
Revision:
Builder: gke-prow-default-pool-cf4891d4-s69b
pod: 6e61f4c6-3a77-11ea-b4a5-fe173d511a50
resultstore: https://source.cloud.google.com/results/invocations/f915c2b4-a802-42ce-bb3f-a5c88537d657/targets/test
infra-commit: ace1aead6
job-version: v1.18.0-alpha.1.933+f4b6b751cdf2d4
master_os_image: cos-77-12371-89-0
node_os_image: cos-77-12371-89-0
revision: v1.18.0-alpha.1.933+f4b6b751cdf2d4

Test Failures


Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown] 4m53s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-autoscaling\]\sCluster\ssize\sautoscaling\s\[Slow\]\sshould\sbe\sable\sto\sscale\sdown\sby\sdraining\ssystem\spods\swith\spdb\[Feature\:ClusterSizeAutoscalingScaleDown\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_size_autoscaling.go:745
Jan 19 05:53:26.979: Unexpected error:
    <*errors.errorString | 0xc0020ba960>: {
        s: "failed to coerce RC into spawning a pod on node ca-minion-group-3kkr within timeout",
    }
    failed to coerce RC into spawning a pod on node ca-minion-group-3kkr within timeout
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_size_autoscaling.go:1022
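
The test gives up here because no pod from its replication controller reached the Running phase on node ca-minion-group-3kkr before the timeout. Below is a minimal client-go sketch of that kind of wait, not the test's own code; the namespace, polling interval, and timeout are illustrative assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForRunningPodOnNode polls until at least one pod in the namespace is
// bound to the given node and has reached the Running phase, or returns the
// wait package's timeout error. Illustrative sketch, not the e2e test's helper.
func waitForRunningPodOnNode(cs kubernetes.Interface, namespace, node string, timeout time.Duration) error {
	return wait.PollImmediate(5*time.Second, timeout, func() (bool, error) {
		pods, err := cs.CoreV1().Pods(namespace).List(context.TODO(), metav1.ListOptions{
			// Only consider pods already scheduled onto the target node.
			FieldSelector: "spec.nodeName=" + node,
		})
		if err != nil {
			return false, err
		}
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				return true, nil
			}
		}
		return false, nil // not there yet, keep polling
	})
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Node name taken from the failure message above; namespace and timeout are assumptions.
	if err := waitForRunningPodOnNode(cs, "default", "ca-minion-group-3kkr", 5*time.Minute); err != nil {
		fmt.Println("no pod became Running on the node in time:", err)
	}
}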
				


Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't trigger additional scale-ups during processing scale-up [Feature:ClusterSizeAutoscalingScaleUp] 7m41s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-autoscaling\]\sCluster\ssize\sautoscaling\s\[Slow\]\sshouldn'\''t\strigger\sadditional\sscale\-ups\sduring\sprocessing\sscale\-up\s\[Feature\:ClusterSizeAutoscalingScaleUp\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_size_autoscaling.go:334
Jan 19 06:00:47.241: Unexpected error:
    <*errors.errorString | 0xc002034110>: {
        s: "Too many pods are still not running: [memory-reservation-5vb6g memory-reservation-k5pwb memory-reservation-k8t48 memory-reservation-pmlbx memory-reservation-qjnll memory-reservation-t6dd8 memory-reservation-whlwk]",
    }
    Too many pods are still not running: [memory-reservation-5vb6g memory-reservation-k5pwb memory-reservation-k8t48 memory-reservation-pmlbx memory-reservation-qjnll memory-reservation-t6dd8 memory-reservation-whlwk]
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_size_autoscaling.go:352
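
Here the failure is the mirror image: seven memory-reservation pods created by the test never reached the Running phase within the scale-up window. The sketch below shows the general shape of such a check with client-go; the namespace and the name=memory-reservation label selector are assumptions for illustration, not values taken from the failing run.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// notRunningPods lists pods matching labelSelector and returns the names of
// those that have not reached the Running phase.
func notRunningPods(cs kubernetes.Interface, namespace, labelSelector string) ([]string, error) {
	pods, err := cs.CoreV1().Pods(namespace).List(context.TODO(), metav1.ListOptions{
		LabelSelector: labelSelector,
	})
	if err != nil {
		return nil, err
	}
	var pending []string
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning {
			pending = append(pending, p.Name)
		}
	}
	return pending, nil
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// "name=memory-reservation" is a guess at the label the reservation RC would use.
	pending, err := notRunningPods(cs, "default", "name=memory-reservation")
	if err != nil {
		panic(err)
	}
	if len(pending) > 0 {
		fmt.Printf("Too many pods are still not running: %v\n", pending)
	}
}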