Result: FAILURE
Tests: 15 failed / 15 succeeded
Started: 2020-10-22 14:55
Elapsed: 5h8m
Revision:
Builder: 9c369aa7-1476-11eb-8a3c-66d0ef5d093b
infra-commit: ce881d4c5
job-version: v1.20.0-alpha.3.42+ededd08ba131b7
master_os_image: cos-85-13310-1041-9
node_os_image: cos-85-13310-1041-9
revision: v1.20.0-alpha.3.42+ededd08ba131b7

Test Failures


Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] Shouldn't perform scale up operation and should list unhealthy status if most of the cluster is broken[Feature:ClusterSizeAutoscalingScaleUp] 18m14s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-autoscaling\]\sCluster\ssize\sautoscaling\s\[Slow\]\sShouldn'\''t\sperform\sscale\sup\soperation\sand\sshould\slist\sunhealthy\sstatus\sif\smost\sof\sthe\scluster\sis\sbroken\[Feature\:ClusterSizeAutoscalingScaleUp\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_size_autoscaling.go:881
Oct 22 19:14:20.520: Unexpected error:
    <*errors.StatusError | 0xc00130e000>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "configmaps \"cluster-autoscaler-status\" not found",
            Reason: "NotFound",
            Details: {
                Name: "cluster-autoscaler-status",
                Group: "",
                Kind: "configmaps",
                UID: "",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 404,
        },
    }
    configmaps "cluster-autoscaler-status" not found
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_size_autoscaling.go:930
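
The 404 above means the test could not read the cluster-autoscaler status ConfigMap. As a rough illustration only (not the test's exact code; the kubeconfig-based client setup and the kube-system namespace are assumptions), a client-go lookup that surfaces the same NotFound error looks like this:

    package main

    import (
        "context"
        "fmt"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a client from the local kubeconfig; the e2e framework wires
        // its client up differently, so treat this setup as illustrative.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Read the autoscaler status ConfigMap; a missing object comes back
        // as a *errors.StatusError with Code 404, as in the trace above.
        cm, err := client.CoreV1().ConfigMaps("kube-system").Get(
            context.TODO(), "cluster-autoscaler-status", metav1.GetOptions{})
        if apierrors.IsNotFound(err) {
            fmt.Println("cluster-autoscaler-status not found:", err)
            return
        }
        if err != nil {
            panic(err)
        }
        fmt.Println(cm.Data)
    }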
				


Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should add node to the particular mig [Feature:ClusterSizeAutoscalingScaleUp] 38m39s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-autoscaling\]\sCluster\ssize\sautoscaling\s\[Slow\]\sshould\sadd\snode\sto\sthe\sparticular\smig\s\[Feature\:ClusterSizeAutoscalingScaleUp\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_size_autoscaling.go:538
Oct 22 15:50:03.278: Unexpected error:
    <*errors.errorString | 0xc0036444b0>: {
        s: "timeout waiting 5m0s for appropriate cluster size",
    }
    timeout waiting 5m0s for appropriate cluster size
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_size_autoscaling.go:583
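
This timeout, and the identical ones in several of the failures below, come from the suite waiting for the node count to settle at an expected value. A minimal sketch of such a poll, assuming client-go plus apimachinery's wait package (the helper name, interval, and readiness criteria are simplified assumptions, not the test's exact logic):

    package e2esketch

    import (
        "context"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForClusterSize polls the node count until it equals want or the
    // timeout elapses; running out of time here is the kind of condition
    // behind the "timeout waiting ... for appropriate cluster size" message.
    func waitForClusterSize(ctx context.Context, c kubernetes.Interface, want int, timeout time.Duration) error {
        return wait.PollImmediate(20*time.Second, timeout, func() (bool, error) {
            nodes, err := c.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
            if err != nil {
                return false, nil // transient API errors: keep polling
            }
            return len(nodes.Items) == want, nil
        })
    }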
				


Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining multiple pods one by one as dictated by pdb[Feature:ClusterSizeAutoscalingScaleDown] 24m39s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-autoscaling\]\sCluster\ssize\sautoscaling\s\[Slow\]\sshould\sbe\sable\sto\sscale\sdown\sby\sdraining\smultiple\spods\sone\sby\sone\sas\sdictated\sby\spdb\[Feature\:ClusterSizeAutoscalingScaleDown\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_size_autoscaling.go:733
Oct 22 17:15:20.600: Unexpected error:
    <*errors.errorString | 0xc003320bb0>: {
        s: "timeout waiting 20m0s for appropriate cluster size",
    }
    timeout waiting 20m0s for appropriate cluster size
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_size_autoscaling.go:736
				


Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down when rescheduling a pod is required and pdb allows for it[Feature:ClusterSizeAutoscalingScaleDown] 25m2s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-autoscaling\]\sCluster\ssize\sautoscaling\s\[Slow\]\sshould\sbe\sable\sto\sscale\sdown\swhen\srescheduling\sa\spod\sis\srequired\sand\spdb\sallows\sfor\sit\[Feature\:ClusterSizeAutoscalingScaleDown\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_size_autoscaling.go:715
Oct 22 19:48:09.616: Unexpected error:
    <*errors.errorString | 0xc001c29030>: {
        s: "timeout waiting 20m0s for appropriate cluster size",
    }
    timeout waiting 20m0s for appropriate cluster size
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_size_autoscaling.go:718
				


Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed [Feature:ClusterSizeAutoscalingScaleDown] 24m5s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-autoscaling\]\sCluster\ssize\sautoscaling\s\[Slow\]\sshould\scorrectly\sscale\sdown\safter\sa\snode\sis\snot\sneeded\s\[Feature\:ClusterSizeAutoscalingScaleDown\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_size_autoscaling.go:684
Oct 22 18:06:27.333: Unexpected error:
    <*errors.errorString | 0xc003387060>: {
        s: "timeout waiting 20m0s for appropriate cluster size",
    }
    timeout waiting 20m0s for appropriate cluster size
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_size_autoscaling.go:680
				


Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed and one node is broken [Feature:ClusterSizeAutoscalingScaleDown] 24m40s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-autoscaling\]\sCluster\ssize\sautoscaling\s\[Slow\]\sshould\scorrectly\sscale\sdown\safter\sa\snode\sis\snot\sneeded\sand\sone\snode\sis\sbroken\s\[Feature\:ClusterSizeAutoscalingScaleDown\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_size_autoscaling.go:687
Oct 22 17:42:20.237: Unexpected error:
    <*errors.errorString | 0xc002d7ec80>: {
        s: "timeout waiting 20m0s for appropriate cluster size",
    }
    timeout waiting 20m0s for appropriate cluster size
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_size_autoscaling.go:680
				


Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pod requesting EmptyDir volume is pending [Feature:ClusterSizeAutoscalingScaleUp] 6m38s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-autoscaling\]\sCluster\ssize\sautoscaling\s\[Slow\]\sshould\sincrease\scluster\ssize\sif\spod\srequesting\sEmptyDir\svolume\sis\spending\s\[Feature\:ClusterSizeAutoscalingScaleUp\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_size_autoscaling.go:445
Oct 22 16:25:35.612: Unexpected error:
    <*errors.errorString | 0xc002a8c0f0>: {
        s: "Only 0 pods started out of 1",
    }
    Only 0 pods started out of 1
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_size_autoscaling.go:459
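
"Only 0 pods started out of 1" (also seen in later failures as "Only 3 pods started out of 4") is the suite reporting that pods never reached Running within its wait. A rough sketch of that kind of count, assuming client-go (the helper name, namespace, and label selector are assumptions for illustration):

    package e2esketch

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // countRunningPods returns how many pods matching selector are Running;
    // comparing this against the expected replica count is what yields
    // messages of the form "Only N pods started out of M".
    func countRunningPods(ctx context.Context, c kubernetes.Interface, ns, selector string) (int, error) {
        pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
        if err != nil {
            return 0, err
        }
        running := 0
        for _, p := range pods.Items {
            if p.Status.Phase == corev1.PodRunning {
                running++
            }
        }
        return running, nil
    }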
				


Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pod requesting volume is pending [Feature:ClusterSizeAutoscalingScaleUp] 11m3s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-autoscaling\]\sCluster\ssize\sautoscaling\s\[Slow\]\sshould\sincrease\scluster\ssize\sif\spod\srequesting\svolume\sis\spending\s\[Feature\:ClusterSizeAutoscalingScaleUp\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_size_autoscaling.go:466
Oct 22 18:14:16.240: Unexpected error:
    <*errors.errorString | 0xc00372e410>: {
        s: "Only 0 pods started out of 1",
    }
    Only 0 pods started out of 1
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_size_autoscaling.go:528
				


Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pods are pending due to host port conflict [Feature:ClusterSizeAutoscalingScaleUp] 8m31s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-autoscaling\]\sCluster\ssize\sautoscaling\s\[Slow\]\sshould\sincrease\scluster\ssize\sif\spods\sare\spending\sdue\sto\shost\sport\sconflict\s\[Feature\:ClusterSizeAutoscalingScaleUp\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_size_autoscaling.go:417
Oct 22 19:25:30.955: Unexpected error:
    <*errors.errorString | 0xc0032e0000>: {
        s: "timeout waiting 5m0s for appropriate cluster size",
    }
    timeout waiting 5m0s for appropriate cluster size
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_size_autoscaling.go:421
				


Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] 24m18s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-autoscaling\]\sCluster\ssize\sautoscaling\s\[Slow\]\sshould\sscale\sdown\swhen\sexpendable\spod\sis\srunning\s\[Feature\:ClusterSizeAutoscalingScaleDown\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_size_autoscaling.go:973
Oct 22 16:50:25.949: Unexpected error:
    <*errors.errorString | 0xc002ffc2b0>: {
        s: "timeout waiting 20m0s for appropriate cluster size",
    }
    timeout waiting 20m0s for appropriate cluster size
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_size_autoscaling.go:980
				


Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should scale up when non expendable pod is created [Feature:ClusterSizeAutoscalingScaleUp] 5m17s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-autoscaling\]\sCluster\ssize\sautoscaling\s\[Slow\]\sshould\sscale\sup\swhen\snon\sexpendable\spod\sis\screated\s\[Feature\:ClusterSizeAutoscalingScaleUp\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_size_autoscaling.go:951
Oct 22 16:20:07.740: Unexpected error:
    <*errors.errorString | 0xc00338e550>: {
        s: "Only 3 pods started out of 4",
    }
    Only 3 pods started out of 4
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_size_autoscaling.go:1316
				


Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't increase cluster size if pending pod is too large [Feature:ClusterSizeAutoscalingScaleUp] 8m18s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-autoscaling\]\sCluster\ssize\sautoscaling\s\[Slow\]\sshouldn'\''t\sincrease\scluster\ssize\sif\spending\spod\sis\stoo\slarge\s\[Feature\:ClusterSizeAutoscalingScaleUp\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_size_autoscaling.go:166
Oct 22 18:34:03.362: Expected
    <bool>: false
to equal
    <bool>: true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_size_autoscaling.go:188