go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-autoscaling\]\sCluster\ssize\sautoscaling\s\[Slow\]\sShould\sbe\sable\sto\sscale\sa\snode\sgroup\sdown\sto\s0\[Feature\:ClusterSizeAutoscalingScaleDown\]$'
test/e2e/autoscaling/cluster_size_autoscaling.go:874
k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.30()
  test/e2e/autoscaling/cluster_size_autoscaling.go:874 +0x490
k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.31()
  test/e2e/autoscaling/cluster_size_autoscaling.go:881 +0x95

from junit_01.xml
[BeforeEach] [sig-autoscaling] Cluster size autoscaling [Slow]
  set up framework | framework.go:178
STEP: Creating a kubernetes client 12/01/22 06:10:01.444
Dec 1 06:10:01.444: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename autoscaling 12/01/22 06:10:01.446
STEP: Waiting for a default service account to be provisioned in namespace 12/01/22 06:10:01.581
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 12/01/22 06:10:01.665
[BeforeEach] [sig-autoscaling] Cluster size autoscaling [Slow]
  test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-autoscaling] Cluster size autoscaling [Slow]
  test/e2e/autoscaling/cluster_size_autoscaling.go:103
STEP: Initial size of ca-minion-group-1: 0 12/01/22 06:10:05.473
STEP: Initial size of ca-minion-group: 2 12/01/22 06:10:09.12
Dec 1 06:10:09.165: INFO: Cluster has reached the desired number of ready nodes 2
STEP: Initial number of schedulable nodes: 2 12/01/22 06:10:09.209
[It] Should be able to scale a node group down to 0[Feature:ClusterSizeAutoscalingScaleDown]
  test/e2e/autoscaling/cluster_size_autoscaling.go:877
STEP: Find smallest node group and manually scale it to a single node 12/01/22 06:10:09.209
Dec 1 06:10:09.209: INFO: Skipping dumping logs from cluster
Dec 1 06:10:14.049: INFO: Skipping dumping logs from cluster
Dec 1 06:10:14.093: INFO: Waiting for ready nodes 3, current ready 2, not ready nodes 0
Dec 1 06:10:34.141: INFO: Waiting for ready nodes 3, current ready 2, not ready nodes 0
Dec 1 06:10:54.188: INFO: Waiting for ready nodes 3, current ready 2, not ready nodes 0
Dec 1 06:11:14.240: INFO: Waiting for ready nodes 3, current ready 2, not ready nodes 0
Dec 1 06:11:34.287: INFO: Cluster has reached the desired number of ready nodes 3
STEP: Target node for scale-down: ca-minion-group-1-flzm 12/01/22 06:11:37.81
STEP: Make the single node unschedulable 12/01/22 06:11:37.81
STEP: Taint node ca-minion-group-1-flzm 12/01/22 06:11:37.81
STEP: Manually drain the single node 12/01/22 06:11:37.91
I1201 06:11:38.189376 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 3, not ready nodes 0
I1201 06:11:58.241710 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 3, not ready nodes 0
I1201 06:12:18.286905 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 3, not ready nodes 0
I1201 06:12:38.333113 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 3, not ready nodes 0
I1201 06:12:58.380038 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 3, not ready nodes 0
I1201 06:13:18.425595 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 3, not ready nodes 0
I1201 06:13:38.471340 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 3, not ready nodes 0
I1201 06:13:58.518060 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 3, not ready nodes 0
I1201 06:14:18.563858 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 3, not ready nodes 0
I1201 06:14:38.610054 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 3, not ready nodes 0
I1201 06:14:58.656295 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 3, not ready nodes 0
------------------------------
Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group down to 0[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 5m7.765s)
  test/e2e/autoscaling/cluster_size_autoscaling.go:877
  In [It] (Node Runtime: 5m0s)
    test/e2e/autoscaling/cluster_size_autoscaling.go:877
  At [By Step] Manually drain the single node (Step Runtime: 3m31.299s)
    test/e2e/autoscaling/cluster_size_autoscaling.go:1465
  Spec Goroutine
  goroutine 5635 [sleep]
  time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195
  > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc0034721a0}, 0xc00044de88, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364
  > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) test/e2e/autoscaling/cluster_size_autoscaling.go:1359
  > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.30() test/e2e/autoscaling/cluster_size_autoscaling.go:868
  > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.31() test/e2e/autoscaling/cluster_size_autoscaling.go:881
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000d01b00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
I1201 06:15:18.700874 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 3, not ready nodes 0
------------------------------
Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group down to 0[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 5m27.766s)
  test/e2e/autoscaling/cluster_size_autoscaling.go:877
  In [It] (Node Runtime: 5m20.001s)
    test/e2e/autoscaling/cluster_size_autoscaling.go:877
  At [By Step] Manually drain the single node (Step Runtime: 3m51.3s)
    test/e2e/autoscaling/cluster_size_autoscaling.go:1465
  Spec Goroutine
  goroutine 5635 [sleep]
  time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195
  > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc0034721a0}, 0xc00044de88, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364
  > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...)
  test/e2e/autoscaling/cluster_size_autoscaling.go:1359
  > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.30() test/e2e/autoscaling/cluster_size_autoscaling.go:868
  > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.31() test/e2e/autoscaling/cluster_size_autoscaling.go:881
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000d01b00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
I1201 06:15:38.745792 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 3, not ready nodes 0
------------------------------
Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group down to 0[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 5m47.767s)
  test/e2e/autoscaling/cluster_size_autoscaling.go:877
  In [It] (Node Runtime: 5m40.002s)
    test/e2e/autoscaling/cluster_size_autoscaling.go:877
  At [By Step] Manually drain the single node (Step Runtime: 4m11.301s)
    test/e2e/autoscaling/cluster_size_autoscaling.go:1465
  Spec Goroutine
  goroutine 5635 [sleep]
  time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195
  > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc0034721a0}, 0xc00044de88, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364
  > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) test/e2e/autoscaling/cluster_size_autoscaling.go:1359
  > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.30() test/e2e/autoscaling/cluster_size_autoscaling.go:868
  > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.31() test/e2e/autoscaling/cluster_size_autoscaling.go:881
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000d01b00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
I1201 06:15:58.792161 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 3, not ready nodes 0
------------------------------
Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group down to 0[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 6m7.768s)
  test/e2e/autoscaling/cluster_size_autoscaling.go:877
  In [It] (Node Runtime: 6m0.004s)
    test/e2e/autoscaling/cluster_size_autoscaling.go:877
  At [By Step] Manually drain the single node (Step Runtime: 4m31.303s)
    test/e2e/autoscaling/cluster_size_autoscaling.go:1465
  Spec Goroutine
  goroutine 5635 [sleep]
  time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195
  > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc0034721a0}, 0xc00044de88, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364
  > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) test/e2e/autoscaling/cluster_size_autoscaling.go:1359
  > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.30() test/e2e/autoscaling/cluster_size_autoscaling.go:868
  > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.31() test/e2e/autoscaling/cluster_size_autoscaling.go:881
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000d01b00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
I1201 06:16:18.839180 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 3, not ready nodes 0
------------------------------
Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group down to 0[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 6m27.769s)
  test/e2e/autoscaling/cluster_size_autoscaling.go:877
  In [It] (Node Runtime: 6m20.005s)
    test/e2e/autoscaling/cluster_size_autoscaling.go:877
  At [By Step] Manually drain the single node (Step Runtime: 4m51.303s)
    test/e2e/autoscaling/cluster_size_autoscaling.go:1465
  Spec Goroutine
  goroutine 5635 [sleep]
  time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195
  > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc0034721a0}, 0xc00044de88, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364
  > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) test/e2e/autoscaling/cluster_size_autoscaling.go:1359
  > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.30() test/e2e/autoscaling/cluster_size_autoscaling.go:868
  > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.31() test/e2e/autoscaling/cluster_size_autoscaling.go:881
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000d01b00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
I1201 06:16:38.884219 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 3, not ready nodes 0
------------------------------
Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group down to 0[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 6m47.771s)
  test/e2e/autoscaling/cluster_size_autoscaling.go:877
  In [It] (Node Runtime: 6m40.006s)
    test/e2e/autoscaling/cluster_size_autoscaling.go:877
  At [By Step] Manually drain the single node (Step Runtime: 5m11.305s)
    test/e2e/autoscaling/cluster_size_autoscaling.go:1465
  Spec Goroutine
  goroutine 5635 [sleep]
  time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195
  > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc0034721a0}, 0xc00044de88, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364
  > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...)
  test/e2e/autoscaling/cluster_size_autoscaling.go:1359
  > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.30() test/e2e/autoscaling/cluster_size_autoscaling.go:868
  > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.31() test/e2e/autoscaling/cluster_size_autoscaling.go:881
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000d01b00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
I1201 06:16:58.929408 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 3, not ready nodes 0
------------------------------
Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group down to 0[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 7m7.773s)
  test/e2e/autoscaling/cluster_size_autoscaling.go:877
  In [It] (Node Runtime: 7m0.009s)
    test/e2e/autoscaling/cluster_size_autoscaling.go:877
  At [By Step] Manually drain the single node (Step Runtime: 5m31.307s)
    test/e2e/autoscaling/cluster_size_autoscaling.go:1465
  Spec Goroutine
  goroutine 5635 [sleep]
  time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195
  > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc0034721a0}, 0xc00044de88, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364
  > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) test/e2e/autoscaling/cluster_size_autoscaling.go:1359
  > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.30() test/e2e/autoscaling/cluster_size_autoscaling.go:868
  > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.31() test/e2e/autoscaling/cluster_size_autoscaling.go:881
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000d01b00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
I1201 06:17:18.975416 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 3, not ready nodes 0
------------------------------
Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group down to 0[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 7m27.774s)
  test/e2e/autoscaling/cluster_size_autoscaling.go:877
  In [It] (Node Runtime: 7m20.009s)
    test/e2e/autoscaling/cluster_size_autoscaling.go:877
  At [By Step] Manually drain the single node (Step Runtime: 5m51.308s)
    test/e2e/autoscaling/cluster_size_autoscaling.go:1465
  Spec Goroutine
  goroutine 5635 [sleep]
  time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195
  > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc0034721a0}, 0xc00044de88, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364
  > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) test/e2e/autoscaling/cluster_size_autoscaling.go:1359
  > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.30() test/e2e/autoscaling/cluster_size_autoscaling.go:868
  > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.31() test/e2e/autoscaling/cluster_size_autoscaling.go:881
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000d01b00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
I1201 06:17:39.021293 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 3, not ready nodes 0
------------------------------
Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group down to 0[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 7m47.775s)
  test/e2e/autoscaling/cluster_size_autoscaling.go:877
  In [It] (Node Runtime: 7m40.011s)
    test/e2e/autoscaling/cluster_size_autoscaling.go:877
  At [By Step] Manually drain the single node (Step Runtime: 6m11.309s)
    test/e2e/autoscaling/cluster_size_autoscaling.go:1465
  Spec Goroutine
  goroutine 5635 [sleep]
  time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195
  > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc0034721a0}, 0xc00044de88, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364
  > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) test/e2e/autoscaling/cluster_size_autoscaling.go:1359
  > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.30() test/e2e/autoscaling/cluster_size_autoscaling.go:868
  > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.31() test/e2e/autoscaling/cluster_size_autoscaling.go:881
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000d01b00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
I1201 06:17:59.066099 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 3, not ready nodes 0
------------------------------
Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group down to 0[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 8m7.777s)
  test/e2e/autoscaling/cluster_size_autoscaling.go:877
  In [It] (Node Runtime: 8m0.012s)
    test/e2e/autoscaling/cluster_size_autoscaling.go:877
  At [By Step] Manually drain the single node (Step Runtime: 6m31.311s)
    test/e2e/autoscaling/cluster_size_autoscaling.go:1465
  Spec Goroutine
  goroutine 5635 [sleep]
  time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195
  > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc0034721a0}, 0xc00044de88, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364
  > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...)
  test/e2e/autoscaling/cluster_size_autoscaling.go:1359
  > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.30() test/e2e/autoscaling/cluster_size_autoscaling.go:868
  > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.31() test/e2e/autoscaling/cluster_size_autoscaling.go:881
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000d01b00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
Dec 1 06:18:19.113: INFO: Condition Ready of node ca-minion-group-m9zb is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669872061 PreferNoSchedule <nil>} {ToBeDeletedByClusterAutoscaler 1669875440 NoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-12-01 06:18:01 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-12-01 06:18:06 +0000 UTC}]. Failure
I1201 06:18:19.113149 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 3, not ready nodes 1
------------------------------
Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group down to 0[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 8m27.78s)
  test/e2e/autoscaling/cluster_size_autoscaling.go:877
  In [It] (Node Runtime: 8m20.015s)
    test/e2e/autoscaling/cluster_size_autoscaling.go:877
  At [By Step] Manually drain the single node (Step Runtime: 6m51.314s)
    test/e2e/autoscaling/cluster_size_autoscaling.go:1465
  Spec Goroutine
  goroutine 5635 [sleep]
  time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195
  > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc0034721a0}, 0xc00044de88, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364
  > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) test/e2e/autoscaling/cluster_size_autoscaling.go:1359
  > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.30() test/e2e/autoscaling/cluster_size_autoscaling.go:868
  > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.31() test/e2e/autoscaling/cluster_size_autoscaling.go:881
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000d01b00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
Dec 1 06:18:39.158: INFO: Condition Ready of node ca-minion-group-m9zb is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669872061 PreferNoSchedule <nil>} {ToBeDeletedByClusterAutoscaler 1669875440 NoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-12-01 06:18:01 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-12-01 06:18:06 +0000 UTC}]. Failure
I1201 06:18:39.158804 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 3, not ready nodes 1
------------------------------
Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group down to 0[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 8m47.781s)
  test/e2e/autoscaling/cluster_size_autoscaling.go:877
  In [It] (Node Runtime: 8m40.016s)
    test/e2e/autoscaling/cluster_size_autoscaling.go:877
  At [By Step] Manually drain the single node (Step Runtime: 7m11.315s)
    test/e2e/autoscaling/cluster_size_autoscaling.go:1465
  Spec Goroutine
  goroutine 5635 [sleep]
  time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195
  > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc0034721a0}, 0xc00044de88, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364
  > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) test/e2e/autoscaling/cluster_size_autoscaling.go:1359
  > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.30() test/e2e/autoscaling/cluster_size_autoscaling.go:868
  > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.31() test/e2e/autoscaling/cluster_size_autoscaling.go:881
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000d01b00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
Dec 1 06:18:59.205: INFO: Condition Ready of node ca-minion-group-m9zb is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669872061 PreferNoSchedule <nil>} {ToBeDeletedByClusterAutoscaler 1669875440 NoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-12-01 06:18:01 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-12-01 06:18:06 +0000 UTC}]. Failure
I1201 06:18:59.205812 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 3, not ready nodes 1
------------------------------
Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group down to 0[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 9m7.784s)
  test/e2e/autoscaling/cluster_size_autoscaling.go:877
  In [It] (Node Runtime: 9m0.019s)
    test/e2e/autoscaling/cluster_size_autoscaling.go:877
  At [By Step] Manually drain the single node (Step Runtime: 7m31.318s)
    test/e2e/autoscaling/cluster_size_autoscaling.go:1465
  Spec Goroutine
  goroutine 5635 [sleep]
  time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195
  > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc0034721a0}, 0xc00044de88, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364
  > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...)
  test/e2e/autoscaling/cluster_size_autoscaling.go:1359
  > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.30() test/e2e/autoscaling/cluster_size_autoscaling.go:868
  > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.31() test/e2e/autoscaling/cluster_size_autoscaling.go:881
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000d01b00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
Dec 1 06:19:19.250: INFO: Condition Ready of node ca-minion-group-m9zb is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669872061 PreferNoSchedule <nil>} {ToBeDeletedByClusterAutoscaler 1669875440 NoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-12-01 06:18:01 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-12-01 06:18:06 +0000 UTC}]. Failure
I1201 06:19:19.250223 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 3, not ready nodes 1
------------------------------
Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group down to 0[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 9m27.786s)
  test/e2e/autoscaling/cluster_size_autoscaling.go:877
  In [It] (Node Runtime: 9m20.021s)
    test/e2e/autoscaling/cluster_size_autoscaling.go:877
  At [By Step] Manually drain the single node (Step Runtime: 7m51.32s)
    test/e2e/autoscaling/cluster_size_autoscaling.go:1465
  Spec Goroutine
  goroutine 5635 [sleep]
  time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195
  > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc0034721a0}, 0xc00044de88, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364
  > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) test/e2e/autoscaling/cluster_size_autoscaling.go:1359
  > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.30() test/e2e/autoscaling/cluster_size_autoscaling.go:868
  > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.31() test/e2e/autoscaling/cluster_size_autoscaling.go:881
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000d01b00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
I1201 06:19:39.297252 7918 cluster_size_autoscaling.go:1381] Cluster has reached the desired size
Dec 1 06:19:43.088: FAIL: Expected
    <int>: 1
to equal
    <int>: 0

Full Stack Trace
k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.30()
  test/e2e/autoscaling/cluster_size_autoscaling.go:874 +0x490
k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.31()
  test/e2e/autoscaling/cluster_size_autoscaling.go:881 +0x95
[AfterEach] [sig-autoscaling] Cluster size autoscaling [Slow]
  test/e2e/framework/node/init/init.go:32
Dec 1 06:19:43.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-autoscaling] Cluster size autoscaling [Slow]
  test/e2e/autoscaling/cluster_size_autoscaling.go:139
STEP: Restoring initial size of the cluster 12/01/22 06:19:43.132
STEP: Setting size of ca-minion-group-1 to 0 12/01/22 06:19:46.723
Dec 1 06:19:46.723: INFO: Skipping dumping logs from cluster
Dec 1 06:19:51.341: INFO: Skipping dumping logs from cluster
STEP: Setting size of ca-minion-group to 2 12/01/22 06:19:55
Dec 1 06:19:55.000: INFO: Skipping dumping logs from cluster
Dec 1 06:19:59.433: INFO: Skipping dumping logs from cluster
Dec 1 06:19:59.477: INFO: Cluster has reached the desired number of ready nodes 2
STEP: Remove taint from node ca-master 12/01/22 06:19:59.52
STEP: Remove taint from node ca-minion-group-1-flzm 12/01/22 06:19:59.562
STEP: Remove taint from node ca-minion-group-vlq2 12/01/22 06:19:59.659
I1201 06:19:59.714717 7918 cluster_size_autoscaling.go:165] Made nodes schedulable again in 193.90559ms
[DeferCleanup (Each)] [sig-autoscaling] Cluster size autoscaling [Slow]
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-autoscaling] Cluster size autoscaling [Slow]
  dump namespaces | framework.go:196
STEP: dump namespace information after failure 12/01/22 06:19:59.714
STEP: Collecting events from namespace "autoscaling-5000". 12/01/22 06:19:59.714
STEP: Found 0 events.
12/01/22 06:19:59.755 Dec 1 06:19:59.801: INFO: POD NODE PHASE GRACE CONDITIONS Dec 1 06:19:59.801: INFO: Dec 1 06:19:59.845: INFO: Logging node info for node ca-master Dec 1 06:19:59.886: INFO: Node Info: &Node{ObjectMeta:{ca-master a2126acf-72e0-4c73-a9ef-ce1238132582 20778 0 2022-12-01 04:35:35 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:ca-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-12-01 04:35:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-12-01 04:35:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-12-01 04:35:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-12-01 06:17:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-autoscaling-migs/us-west1-b/ca-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-12-01 04:35:50 +0000 UTC,LastTransitionTime:2022-12-01 04:35:50 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-12-01 06:17:50 +0000 UTC,LastTransitionTime:2022-12-01 04:35:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-12-01 06:17:50 +0000 UTC,LastTransitionTime:2022-12-01 04:35:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-12-01 06:17:50 +0000 UTC,LastTransitionTime:2022-12-01 04:35:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-12-01 06:17:50 +0000 UTC,LastTransitionTime:2022-12-01 04:35:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.105.118.216,},NodeAddress{Type:InternalDNS,Address:ca-master.c.k8s-jkns-gci-autoscaling-migs.internal,},NodeAddress{Type:Hostname,Address:ca-master.c.k8s-jkns-gci-autoscaling-migs.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bac070a384fefb9133ee7878e15673cf,SystemUUID:bac070a3-84fe-fb91-33ee-7878e15673cf,BootID:39b8786f-3724-43ea-9f9b-9333f7876ff8,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.6.6,KubeletVersion:v1.27.0-alpha.0.54+79cba170b55bd0,KubeProxyVersion:v1.27.0-alpha.0.54+79cba170b55bd0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.54_79cba170b55bd0],SizeBytes:135160275,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.54_79cba170b55bd0],SizeBytes:124990265,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.54_79cba170b55bd0],SizeBytes:57660216,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/autoscaling/cluster-autoscaler@sha256:07ab8c89cd0ad296ddb6347febe196d8fe0e1c757656a98f71199860d87cf1a5 registry.k8s.io/autoscaling/cluster-autoscaler:v1.22.0],SizeBytes:24220268,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c 
registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Dec 1 06:19:59.887: INFO: Logging kubelet events for node ca-master Dec 1 06:19:59.932: INFO: Logging pods the kubelet thinks is on node ca-master Dec 1 06:19:59.993: INFO: metadata-proxy-v0.1-4rrgr started at 2022-12-01 04:35:35 +0000 UTC (0+2 container statuses recorded) Dec 1 06:19:59.993: INFO: Container metadata-proxy ready: true, restart count 0 Dec 1 06:19:59.993: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Dec 1 06:19:59.993: INFO: konnectivity-server-ca-master started at 2022-12-01 04:34:44 +0000 UTC (0+1 container statuses recorded) Dec 1 06:19:59.993: INFO: Container konnectivity-server-container ready: true, restart count 0 Dec 1 06:19:59.993: INFO: kube-scheduler-ca-master started at 2022-12-01 04:34:44 +0000 UTC (0+1 container statuses recorded) Dec 1 06:19:59.993: INFO: Container kube-scheduler ready: true, restart count 0 Dec 1 06:19:59.993: INFO: etcd-server-events-ca-master started at 2022-12-01 04:34:44 +0000 UTC (0+1 container statuses recorded) Dec 1 06:19:59.993: INFO: Container etcd-container ready: true, restart count 0 Dec 1 06:19:59.993: INFO: etcd-server-ca-master started at 2022-12-01 04:34:44 +0000 UTC (0+1 container statuses recorded) Dec 1 06:19:59.993: INFO: Container etcd-container ready: true, restart count 0 Dec 1 06:19:59.993: INFO: kube-addon-manager-ca-master started at 2022-12-01 04:35:03 +0000 UTC (0+1 container statuses recorded) Dec 1 06:19:59.993: INFO: Container kube-addon-manager ready: true, restart count 0 Dec 1 06:19:59.993: INFO: cluster-autoscaler-ca-master started at 2022-12-01 04:35:03 +0000 UTC (0+1 container statuses recorded) Dec 1 06:19:59.993: INFO: Container cluster-autoscaler ready: true, restart count 2 Dec 1 06:19:59.993: INFO: l7-lb-controller-ca-master started at 2022-12-01 04:35:03 +0000 UTC (0+1 container statuses recorded) Dec 1 06:19:59.993: INFO: Container l7-lb-controller ready: true, restart count 2 Dec 1 06:19:59.993: INFO: kube-apiserver-ca-master started at 2022-12-01 04:34:44 +0000 UTC (0+1 container statuses recorded) Dec 1 06:19:59.993: INFO: Container kube-apiserver ready: true, restart count 0 Dec 1 06:19:59.993: INFO: kube-controller-manager-ca-master started at 2022-12-01 04:34:44 +0000 UTC (0+1 container statuses recorded) Dec 1 06:19:59.993: INFO: Container kube-controller-manager ready: true, restart count 1 Dec 1 06:20:00.206: INFO: Latency metrics for node ca-master Dec 1 06:20:00.206: INFO: Logging node info for node ca-minion-group-1-flzm Dec 1 06:20:00.250: INFO: Node Info: &Node{ObjectMeta:{ca-minion-group-1-flzm 3671cd64-abed-4025-97f1-6278bc94193f 21125 0 2022-12-01 06:11:20 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:ca-minion-group-1-flzm kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 
topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-12-01 06:11:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-12-01 06:11:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.21.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-12-01 06:11:30 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2022-12-01 06:16:27 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-12-01 06:16:56 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.21.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-autoscaling-migs/us-west1-b/ca-minion-group-1-flzm,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.21.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 
0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-12-01 06:16:27 +0000 UTC,LastTransitionTime:2022-12-01 06:11:25 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-12-01 06:16:27 +0000 UTC,LastTransitionTime:2022-12-01 06:11:25 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-12-01 06:16:27 +0000 UTC,LastTransitionTime:2022-12-01 06:11:25 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-12-01 06:16:27 +0000 UTC,LastTransitionTime:2022-12-01 06:11:25 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-12-01 06:16:27 +0000 UTC,LastTransitionTime:2022-12-01 06:11:25 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-12-01 06:16:27 +0000 UTC,LastTransitionTime:2022-12-01 06:11:25 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-12-01 06:16:27 +0000 UTC,LastTransitionTime:2022-12-01 06:11:25 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-12-01 06:11:30 +0000 UTC,LastTransitionTime:2022-12-01 06:11:30 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-12-01 06:16:56 +0000 UTC,LastTransitionTime:2022-12-01 06:11:20 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-12-01 06:16:56 +0000 UTC,LastTransitionTime:2022-12-01 06:11:20 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-12-01 06:16:56 +0000 UTC,LastTransitionTime:2022-12-01 06:11:20 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-12-01 06:16:56 +0000 UTC,LastTransitionTime:2022-12-01 06:11:23 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.24,},NodeAddress{Type:ExternalIP,Address:35.233.150.138,},NodeAddress{Type:InternalDNS,Address:ca-minion-group-1-flzm.c.k8s-jkns-gci-autoscaling-migs.internal,},NodeAddress{Type:Hostname,Address:ca-minion-group-1-flzm.c.k8s-jkns-gci-autoscaling-migs.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:535f589ba7c6f3d50c24252937e19f86,SystemUUID:535f589b-a7c6-f3d5-0c24-252937e19f86,BootID:2d32e708-64ed-4bd0-bf79-02c9b5810272,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.6.6,KubeletVersion:v1.27.0-alpha.0.54+79cba170b55bd0,KubeProxyVersion:v1.27.0-alpha.0.54+79cba170b55bd0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.54_79cba170b55bd0],SizeBytes:67201736,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Dec 1 06:20:00.250: INFO: Logging kubelet events for node ca-minion-group-1-flzm Dec 1 06:20:00.298: INFO: Logging pods the kubelet thinks is on node ca-minion-group-1-flzm Dec 1 06:20:05.349: INFO: Unable to retrieve kubelet pods for node ca-minion-group-1-flzm: error trying to reach service: dial tcp 10.138.0.24:10250: i/o timeout Dec 1 06:20:05.349: INFO: Logging node info for node ca-minion-group-vlq2 Dec 1 06:20:05.393: INFO: Node Info: &Node{ObjectMeta:{ca-minion-group-vlq2 132befc3-8b36-49c3-8aee-8af679afd99a 20487 0 2022-12-01 05:20:51 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:ca-minion-group-vlq2 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-12-01 05:20:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-12-01 05:21:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.14.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-12-01 05:21:02 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-12-01 06:15:28 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status} {node-problem-detector Update v1 2022-12-01 06:16:02 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.14.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-autoscaling-migs/us-west1-b/ca-minion-group-vlq2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.14.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} 
{<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-12-01 06:16:02 +0000 UTC,LastTransitionTime:2022-12-01 05:20:55 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-12-01 06:16:02 +0000 UTC,LastTransitionTime:2022-12-01 05:20:55 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-12-01 06:16:02 +0000 UTC,LastTransitionTime:2022-12-01 05:20:55 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-12-01 06:16:02 +0000 UTC,LastTransitionTime:2022-12-01 05:20:55 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-12-01 06:16:02 +0000 UTC,LastTransitionTime:2022-12-01 05:20:55 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-12-01 06:16:02 +0000 UTC,LastTransitionTime:2022-12-01 05:20:55 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-12-01 06:16:02 +0000 UTC,LastTransitionTime:2022-12-01 05:20:55 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-12-01 05:21:02 +0000 UTC,LastTransitionTime:2022-12-01 05:21:02 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-12-01 06:15:28 +0000 UTC,LastTransitionTime:2022-12-01 05:20:51 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-12-01 06:15:28 +0000 UTC,LastTransitionTime:2022-12-01 05:20:51 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-12-01 06:15:28 +0000 UTC,LastTransitionTime:2022-12-01 05:20:51 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-12-01 06:15:28 +0000 UTC,LastTransitionTime:2022-12-01 05:20:51 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.16,},NodeAddress{Type:ExternalIP,Address:35.227.188.214,},NodeAddress{Type:InternalDNS,Address:ca-minion-group-vlq2.c.k8s-jkns-gci-autoscaling-migs.internal,},NodeAddress{Type:Hostname,Address:ca-minion-group-vlq2.c.k8s-jkns-gci-autoscaling-migs.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:240e9092aae0ae79fa5461368e619ce5,SystemUUID:240e9092-aae0-ae79-fa54-61368e619ce5,BootID:02338211-5cf4-4ba4-bf8a-82e73c605696,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.6.6,KubeletVersion:v1.27.0-alpha.0.54+79cba170b55bd0,KubeProxyVersion:v1.27.0-alpha.0.54+79cba170b55bd0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.54_79cba170b55bd0],SizeBytes:67201736,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Dec 1 06:20:05.393: INFO: Logging kubelet events for node ca-minion-group-vlq2 Dec 1 06:20:05.438: INFO: Logging pods the kubelet thinks is on node ca-minion-group-vlq2 Dec 1 06:20:05.503: INFO: l7-default-backend-8549d69d99-n8nmc started at 2022-12-01 05:49:04 +0000 UTC (0+1 container statuses recorded) Dec 1 06:20:05.503: INFO: Container default-http-backend ready: true, restart count 0 Dec 1 06:20:05.503: INFO: metrics-server-v0.5.2-867b8754b9-pmk4k started at 2022-12-01 05:30:20 +0000 UTC (0+2 container statuses recorded) Dec 1 06:20:05.503: INFO: Container metrics-server ready: true, restart count 1 Dec 1 06:20:05.503: INFO: Container metrics-server-nanny 
ready: true, restart count 0 Dec 1 06:20:05.503: INFO: kube-proxy-ca-minion-group-vlq2 started at 2022-12-01 05:20:51 +0000 UTC (0+1 container statuses recorded) Dec 1 06:20:05.503: INFO: Container kube-proxy ready: true, restart count 0 Dec 1 06:20:05.503: INFO: coredns-6d97d5ddb-gpg9p started at 2022-12-01 05:49:04 +0000 UTC (0+1 container statuses recorded) Dec 1 06:20:05.503: INFO: Container coredns ready: true, restart count 0 Dec 1 06:20:05.503: INFO: konnectivity-agent-x9vdq started at 2022-12-01 05:21:02 +0000 UTC (0+1 container statuses recorded) Dec 1 06:20:05.503: INFO: Container konnectivity-agent ready: true, restart count 0 Dec 1 06:20:05.503: INFO: volume-snapshot-controller-0 started at 2022-12-01 05:49:04 +0000 UTC (0+1 container statuses recorded) Dec 1 06:20:05.503: INFO: Container volume-snapshot-controller ready: true, restart count 0 Dec 1 06:20:05.503: INFO: metadata-proxy-v0.1-mvw84 started at 2022-12-01 05:20:52 +0000 UTC (0+2 container statuses recorded) Dec 1 06:20:05.503: INFO: Container metadata-proxy ready: true, restart count 0 Dec 1 06:20:05.503: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Dec 1 06:20:05.688: INFO: Latency metrics for node ca-minion-group-vlq2 [DeferCleanup (Each)] [sig-autoscaling] Cluster size autoscaling [Slow] tear down framework | framework.go:193 STEP: Destroying namespace "autoscaling-5000" for this suite. 12/01/22 06:20:05.688
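The node dump above ends with the usual condition list (MemoryPressure, DiskPressure, PIDPressure, Ready) that the suite's "current size N, not ready nodes M" bookkeeping is derived from. As a point of reference only, here is a minimal client-go sketch of that readiness count, assuming the `/workspace/.kube/config` kubeconfig shown in the logs; the `isNodeReady` helper and the summary printout are illustrative and not the e2e framework's own implementation:

```go
// Illustrative sketch: count Ready vs. not-ready nodes with client-go,
// mirroring the "ready nodes X, not ready nodes Y" lines in the log above.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isNodeReady reports whether the node's Ready condition is True.
func isNodeReady(node *corev1.Node) bool {
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path taken from the test log; adjust for a local cluster.
	config, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}

	ready, notReady := 0, 0
	for i := range nodes.Items {
		if isNodeReady(&nodes.Items[i]) {
			ready++
		} else {
			notReady++
		}
	}
	fmt.Printf("ready nodes: %d, not ready nodes: %d\n", ready, notReady)
}
```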
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-autoscaling\]\sCluster\ssize\sautoscaling\s\[Slow\]\sshould\sadd\snode\sto\sthe\sparticular\smig\s\[Feature\:ClusterSizeAutoscalingScaleUp\]$'
test/e2e/framework/node/helper.go:57 k8s.io/kubernetes/test/e2e/framework/node.AddOrUpdateLabelOnNode({0x801de88?, 0xc002077ba0?}, {0xc0030b2553?, 0x16?}, {0x767f89e?, 0x25?}, {0x75b7b02?, 0x4?}) test/e2e/framework/node/helper.go:57 +0x145 k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.18() test/e2e/autoscaling/cluster_size_autoscaling.go:579 +0x4effrom junit_01.xml
[BeforeEach] [sig-autoscaling] Cluster size autoscaling [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 12/01/22 05:56:35.383 Dec 1 05:56:35.383: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename autoscaling 12/01/22 05:56:35.385 STEP: Waiting for a default service account to be provisioned in namespace 12/01/22 05:56:35.512 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 12/01/22 05:56:35.593 [BeforeEach] [sig-autoscaling] Cluster size autoscaling [Slow] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-autoscaling] Cluster size autoscaling [Slow] test/e2e/autoscaling/cluster_size_autoscaling.go:103 STEP: Initial size of ca-minion-group-1: 0 12/01/22 05:56:39.345 STEP: Initial size of ca-minion-group: 2 12/01/22 05:56:43.42 Dec 1 05:56:43.465: INFO: Cluster has reached the desired number of ready nodes 2 STEP: Initial number of schedulable nodes: 2 12/01/22 05:56:43.511 [It] should add node to the particular mig [Feature:ClusterSizeAutoscalingScaleUp] test/e2e/autoscaling/cluster_size_autoscaling.go:542 STEP: Finding the smallest MIG 12/01/22 05:56:43.511 STEP: Setting size of ca-minion-group-1 to 1 12/01/22 05:56:47.064 Dec 1 05:56:47.064: INFO: Skipping dumping logs from cluster Dec 1 05:56:51.411: INFO: Skipping dumping logs from cluster STEP: Annotating nodes of the smallest MIG(ca-minion-group-1): [ca-minion-group-1-hxzx] 12/01/22 05:56:58.618 Dec 1 05:56:58.662: INFO: Unexpected error: <*errors.StatusError | 0xc003320aa0>: { ErrStatus: code: 404 details: kind: nodes name: ca-minion-group-1-hxzx message: nodes "ca-minion-group-1-hxzx" not found metadata: {} reason: NotFound status: Failure, } Dec 1 05:56:58.662: FAIL: nodes "ca-minion-group-1-hxzx" not found Full Stack Trace k8s.io/kubernetes/test/e2e/framework/node.AddOrUpdateLabelOnNode({0x801de88?, 0xc002077ba0?}, {0xc0030b2553?, 0x16?}, {0x767f89e?, 0x25?}, {0x75b7b02?, 0x4?}) test/e2e/framework/node/helper.go:57 +0x145 k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.18() test/e2e/autoscaling/cluster_size_autoscaling.go:579 +0x4ef STEP: Removing labels from nodes 12/01/22 05:56:58.662 STEP: removing the label cluster-autoscaling-test.special-node off the node ca-minion-group-1-hxzx 12/01/22 05:56:58.662 Dec 1 05:56:58.704: INFO: Unexpected error: <*errors.StatusError | 0xc003321220>: { ErrStatus: code: 404 details: kind: nodes name: ca-minion-group-1-hxzx message: nodes "ca-minion-group-1-hxzx" not found metadata: {} reason: NotFound status: Failure, } Dec 1 05:56:58.704: FAIL: nodes "ca-minion-group-1-hxzx" not found Full Stack Trace k8s.io/kubernetes/test/e2e/framework/node.RemoveLabelOffNode({0x801de88, 0xc002077ba0}, {0xc0030b2553, 0x16}, {0x767f89e, 0x25}) test/e2e/framework/node/helper.go:72 +0xf6 k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.18.1(0xc0001c7f10?) test/e2e/autoscaling/cluster_size_autoscaling.go:568 +0xb4 panic({0x70eb7e0, 0xc0001c7f10}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework.Fail({0xc004d64cf0, 0x28}, {0xc002eab890?, 0xc004d64cf0?, 0xc002eab8b8?}) test/e2e/framework/log.go:61 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7fac560, 0xc003320aa0}, {0x0?, 0xc00361d918?, 0xc0da2e6aa4e2b5ed?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) 
test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework/node.AddOrUpdateLabelOnNode({0x801de88?, 0xc002077ba0?}, {0xc0030b2553?, 0x16?}, {0x767f89e?, 0x25?}, {0x75b7b02?, 0x4?}) test/e2e/framework/node/helper.go:57 +0x145 k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.18() test/e2e/autoscaling/cluster_size_autoscaling.go:579 +0x4ef [AfterEach] [sig-autoscaling] Cluster size autoscaling [Slow] test/e2e/framework/node/init/init.go:32 Dec 1 05:56:58.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-autoscaling] Cluster size autoscaling [Slow] test/e2e/autoscaling/cluster_size_autoscaling.go:139 STEP: Restoring initial size of the cluster 12/01/22 05:56:58.75 STEP: Setting size of ca-minion-group-1 to 0 12/01/22 05:57:02.317 Dec 1 05:57:02.317: INFO: Skipping dumping logs from cluster Dec 1 05:57:07.166: INFO: Skipping dumping logs from cluster Dec 1 05:57:10.881: INFO: Cluster has reached the desired number of ready nodes 2 STEP: Remove taint from node ca-master 12/01/22 05:57:10.926 STEP: Remove taint from node ca-minion-group-m9zb 12/01/22 05:57:10.971 STEP: Remove taint from node ca-minion-group-vlq2 12/01/22 05:57:11.014 I1201 05:57:11.057417 7918 cluster_size_autoscaling.go:165] Made nodes schedulable again in 130.4738ms [DeferCleanup (Each)] [sig-autoscaling] Cluster size autoscaling [Slow] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-autoscaling] Cluster size autoscaling [Slow] dump namespaces | framework.go:196 STEP: dump namespace information after failure 12/01/22 05:57:11.057 STEP: Collecting events from namespace "autoscaling-6767". 12/01/22 05:57:11.057 STEP: Found 0 events. 12/01/22 05:57:11.098 Dec 1 05:57:11.139: INFO: POD NODE PHASE GRACE CONDITIONS Dec 1 05:57:11.139: INFO: Dec 1 05:57:11.184: INFO: Logging node info for node ca-master Dec 1 05:57:11.228: INFO: Node Info: &Node{ObjectMeta:{ca-master a2126acf-72e0-4c73-a9ef-ce1238132582 15435 0 2022-12-01 04:35:35 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:ca-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-12-01 04:35:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-12-01 04:35:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-12-01 04:35:50 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-12-01 05:52:19 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-autoscaling-migs/us-west1-b/ca-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-12-01 04:35:50 +0000 UTC,LastTransitionTime:2022-12-01 04:35:50 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-12-01 05:52:19 +0000 UTC,LastTransitionTime:2022-12-01 04:35:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-12-01 05:52:19 +0000 UTC,LastTransitionTime:2022-12-01 04:35:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-12-01 05:52:19 +0000 UTC,LastTransitionTime:2022-12-01 04:35:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-12-01 05:52:19 +0000 UTC,LastTransitionTime:2022-12-01 04:35:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.105.118.216,},NodeAddress{Type:InternalDNS,Address:ca-master.c.k8s-jkns-gci-autoscaling-migs.internal,},NodeAddress{Type:Hostname,Address:ca-master.c.k8s-jkns-gci-autoscaling-migs.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bac070a384fefb9133ee7878e15673cf,SystemUUID:bac070a3-84fe-fb91-33ee-7878e15673cf,BootID:39b8786f-3724-43ea-9f9b-9333f7876ff8,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.6.6,KubeletVersion:v1.27.0-alpha.0.54+79cba170b55bd0,KubeProxyVersion:v1.27.0-alpha.0.54+79cba170b55bd0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.54_79cba170b55bd0],SizeBytes:135160275,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.54_79cba170b55bd0],SizeBytes:124990265,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.54_79cba170b55bd0],SizeBytes:57660216,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/autoscaling/cluster-autoscaler@sha256:07ab8c89cd0ad296ddb6347febe196d8fe0e1c757656a98f71199860d87cf1a5 registry.k8s.io/autoscaling/cluster-autoscaler:v1.22.0],SizeBytes:24220268,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Dec 1 05:57:11.228: INFO: Logging kubelet events for node ca-master Dec 1 05:57:11.274: INFO: Logging pods the kubelet thinks is on node ca-master Dec 1 05:57:11.340: INFO: kube-scheduler-ca-master started at 2022-12-01 04:34:44 +0000 UTC (0+1 container statuses recorded) Dec 1 05:57:11.340: INFO: Container kube-scheduler ready: true, restart count 0 Dec 1 05:57:11.340: INFO: metadata-proxy-v0.1-4rrgr started at 2022-12-01 04:35:35 +0000 UTC (0+2 container statuses recorded) Dec 1 05:57:11.340: INFO: Container metadata-proxy ready: true, restart count 0 Dec 1 05:57:11.340: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Dec 1 05:57:11.340: INFO: 
konnectivity-server-ca-master started at 2022-12-01 04:34:44 +0000 UTC (0+1 container statuses recorded) Dec 1 05:57:11.340: INFO: Container konnectivity-server-container ready: true, restart count 0 Dec 1 05:57:11.340: INFO: kube-controller-manager-ca-master started at 2022-12-01 04:34:44 +0000 UTC (0+1 container statuses recorded) Dec 1 05:57:11.340: INFO: Container kube-controller-manager ready: true, restart count 1 Dec 1 05:57:11.340: INFO: etcd-server-events-ca-master started at 2022-12-01 04:34:44 +0000 UTC (0+1 container statuses recorded) Dec 1 05:57:11.340: INFO: Container etcd-container ready: true, restart count 0 Dec 1 05:57:11.340: INFO: etcd-server-ca-master started at 2022-12-01 04:34:44 +0000 UTC (0+1 container statuses recorded) Dec 1 05:57:11.340: INFO: Container etcd-container ready: true, restart count 0 Dec 1 05:57:11.340: INFO: kube-addon-manager-ca-master started at 2022-12-01 04:35:03 +0000 UTC (0+1 container statuses recorded) Dec 1 05:57:11.340: INFO: Container kube-addon-manager ready: true, restart count 0 Dec 1 05:57:11.340: INFO: cluster-autoscaler-ca-master started at 2022-12-01 04:35:03 +0000 UTC (0+1 container statuses recorded) Dec 1 05:57:11.340: INFO: Container cluster-autoscaler ready: true, restart count 2 Dec 1 05:57:11.340: INFO: l7-lb-controller-ca-master started at 2022-12-01 04:35:03 +0000 UTC (0+1 container statuses recorded) Dec 1 05:57:11.340: INFO: Container l7-lb-controller ready: true, restart count 2 Dec 1 05:57:11.340: INFO: kube-apiserver-ca-master started at 2022-12-01 04:34:44 +0000 UTC (0+1 container statuses recorded) Dec 1 05:57:11.340: INFO: Container kube-apiserver ready: true, restart count 0 Dec 1 05:57:11.533: INFO: Latency metrics for node ca-master Dec 1 05:57:11.533: INFO: Logging node info for node ca-minion-group-m9zb Dec 1 05:57:11.577: INFO: Node Info: &Node{ObjectMeta:{ca-minion-group-m9zb 4c854cc9-5b08-4d5d-9b2d-526b4e3cf882 16251 0 2022-12-01 05:20:51 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:ca-minion-group-m9zb kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-12-01 05:20:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-12-01 05:21:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.15.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-12-01 05:21:02 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2022-12-01 05:56:01 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-12-01 05:57:06 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.15.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-autoscaling-migs/us-west1-b/ca-minion-group-m9zb,Unschedulable:false,Taints:[]Taint{Taint{Key:DeletionCandidateOfClusterAutoscaler,Value:1669872061,Effect:PreferNoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.15.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-12-01 05:56:01 +0000 UTC,LastTransitionTime:2022-12-01 05:20:56 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-12-01 05:56:01 +0000 UTC,LastTransitionTime:2022-12-01 05:20:56 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-12-01 05:56:01 +0000 UTC,LastTransitionTime:2022-12-01 05:20:56 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning 
properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-12-01 05:56:01 +0000 UTC,LastTransitionTime:2022-12-01 05:20:56 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-12-01 05:56:01 +0000 UTC,LastTransitionTime:2022-12-01 05:20:56 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-12-01 05:56:01 +0000 UTC,LastTransitionTime:2022-12-01 05:20:56 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-12-01 05:56:01 +0000 UTC,LastTransitionTime:2022-12-01 05:20:56 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-12-01 05:21:02 +0000 UTC,LastTransitionTime:2022-12-01 05:21:02 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-12-01 05:57:06 +0000 UTC,LastTransitionTime:2022-12-01 05:20:51 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-12-01 05:57:06 +0000 UTC,LastTransitionTime:2022-12-01 05:20:51 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-12-01 05:57:06 +0000 UTC,LastTransitionTime:2022-12-01 05:20:51 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-12-01 05:57:06 +0000 UTC,LastTransitionTime:2022-12-01 05:20:52 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.17,},NodeAddress{Type:ExternalIP,Address:34.168.125.36,},NodeAddress{Type:InternalDNS,Address:ca-minion-group-m9zb.c.k8s-jkns-gci-autoscaling-migs.internal,},NodeAddress{Type:Hostname,Address:ca-minion-group-m9zb.c.k8s-jkns-gci-autoscaling-migs.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4798b12b4695449fe4795c73cdd4e8ab,SystemUUID:4798b12b-4695-449f-e479-5c73cdd4e8ab,BootID:b9b728c7-5e8b-4849-a085-242c6074e4ad,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.6.6,KubeletVersion:v1.27.0-alpha.0.54+79cba170b55bd0,KubeProxyVersion:v1.27.0-alpha.0.54+79cba170b55bd0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.54_79cba170b55bd0],SizeBytes:67201736,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Dec 1 05:57:11.577: INFO: Logging kubelet events for node ca-minion-group-m9zb Dec 1 05:57:11.626: INFO: Logging pods the kubelet thinks is on node ca-minion-group-m9zb Dec 1 05:57:11.702: INFO: konnectivity-agent-kmdrk started at 2022-12-01 05:21:02 +0000 UTC (0+1 container statuses recorded) Dec 1 05:57:11.702: INFO: Container konnectivity-agent ready: true, restart count 0 Dec 1 05:57:11.702: INFO: kube-proxy-ca-minion-group-m9zb started at 2022-12-01 05:20:51 +0000 UTC (0+1 container statuses recorded) Dec 1 05:57:11.702: INFO: Container kube-proxy ready: true, restart count 0 Dec 1 05:57:11.702: INFO: metadata-proxy-v0.1-7cpg9 started at 2022-12-01 05:20:52 +0000 UTC (0+2 container statuses recorded) Dec 1 05:57:11.702: INFO: Container metadata-proxy ready: true, restart count 0 Dec 1 05:57:11.702: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Dec 1 05:57:11.882: INFO: Latency metrics for node ca-minion-group-m9zb Dec 1 05:57:11.882: INFO: Logging node info for node ca-minion-group-vlq2 Dec 1 05:57:11.927: INFO: Node Info: &Node{ObjectMeta:{ca-minion-group-vlq2 132befc3-8b36-49c3-8aee-8af679afd99a 16055 0 2022-12-01 05:20:51 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:ca-minion-group-vlq2 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] 
[] [] [{kubelet Update v1 2022-12-01 05:20:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-12-01 05:21:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.14.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-12-01 05:21:02 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-12-01 05:55:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status} {node-problem-detector Update v1 2022-12-01 05:56:00 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.14.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-autoscaling-migs/us-west1-b/ca-minion-group-vlq2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.14.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 
DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-12-01 05:56:00 +0000 UTC,LastTransitionTime:2022-12-01 05:20:55 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-12-01 05:56:00 +0000 UTC,LastTransitionTime:2022-12-01 05:20:55 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-12-01 05:56:00 +0000 UTC,LastTransitionTime:2022-12-01 05:20:55 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-12-01 05:56:00 +0000 UTC,LastTransitionTime:2022-12-01 05:20:55 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-12-01 05:56:00 +0000 UTC,LastTransitionTime:2022-12-01 05:20:55 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-12-01 05:56:00 +0000 UTC,LastTransitionTime:2022-12-01 05:20:55 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-12-01 05:56:00 +0000 UTC,LastTransitionTime:2022-12-01 05:20:55 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-12-01 05:21:02 +0000 UTC,LastTransitionTime:2022-12-01 05:21:02 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-12-01 05:55:05 +0000 UTC,LastTransitionTime:2022-12-01 05:20:51 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-12-01 05:55:05 +0000 UTC,LastTransitionTime:2022-12-01 05:20:51 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-12-01 05:55:05 +0000 UTC,LastTransitionTime:2022-12-01 05:20:51 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-12-01 05:55:05 +0000 UTC,LastTransitionTime:2022-12-01 05:20:51 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.16,},NodeAddress{Type:ExternalIP,Address:35.227.188.214,},NodeAddress{Type:InternalDNS,Address:ca-minion-group-vlq2.c.k8s-jkns-gci-autoscaling-migs.internal,},NodeAddress{Type:Hostname,Address:ca-minion-group-vlq2.c.k8s-jkns-gci-autoscaling-migs.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:240e9092aae0ae79fa5461368e619ce5,SystemUUID:240e9092-aae0-ae79-fa54-61368e619ce5,BootID:02338211-5cf4-4ba4-bf8a-82e73c605696,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.6.6,KubeletVersion:v1.27.0-alpha.0.54+79cba170b55bd0,KubeProxyVersion:v1.27.0-alpha.0.54+79cba170b55bd0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.54_79cba170b55bd0],SizeBytes:67201736,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Dec 1 05:57:11.927: INFO: Logging kubelet events for node ca-minion-group-vlq2 Dec 1 05:57:11.973: INFO: Logging pods the kubelet thinks is on node ca-minion-group-vlq2 Dec 1 05:57:12.044: INFO: coredns-6d97d5ddb-gpg9p started at 2022-12-01 05:49:04 +0000 UTC (0+1 container statuses recorded) Dec 1 05:57:12.044: INFO: Container coredns ready: true, restart count 0 Dec 1 05:57:12.044: INFO: konnectivity-agent-x9vdq started at 2022-12-01 05:21:02 +0000 UTC (0+1 container statuses recorded) Dec 1 05:57:12.044: INFO: Container konnectivity-agent ready: true, restart count 0 Dec 1 05:57:12.044: INFO: l7-default-backend-8549d69d99-n8nmc started at 2022-12-01 05:49:04 
+0000 UTC (0+1 container statuses recorded) Dec 1 05:57:12.044: INFO: Container default-http-backend ready: true, restart count 0 Dec 1 05:57:12.044: INFO: volume-snapshot-controller-0 started at 2022-12-01 05:49:04 +0000 UTC (0+1 container statuses recorded) Dec 1 05:57:12.044: INFO: Container volume-snapshot-controller ready: true, restart count 0 Dec 1 05:57:12.044: INFO: metrics-server-v0.5.2-867b8754b9-pmk4k started at 2022-12-01 05:30:20 +0000 UTC (0+2 container statuses recorded) Dec 1 05:57:12.044: INFO: Container metrics-server ready: true, restart count 1 Dec 1 05:57:12.044: INFO: Container metrics-server-nanny ready: true, restart count 0 Dec 1 05:57:12.044: INFO: kube-proxy-ca-minion-group-vlq2 started at 2022-12-01 05:20:51 +0000 UTC (0+1 container statuses recorded) Dec 1 05:57:12.044: INFO: Container kube-proxy ready: true, restart count 0 Dec 1 05:57:12.044: INFO: metadata-proxy-v0.1-mvw84 started at 2022-12-01 05:20:52 +0000 UTC (0+2 container statuses recorded) Dec 1 05:57:12.044: INFO: Container metadata-proxy ready: true, restart count 0 Dec 1 05:57:12.044: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Dec 1 05:57:12.229: INFO: Latency metrics for node ca-minion-group-vlq2 [DeferCleanup (Each)] [sig-autoscaling] Cluster size autoscaling [Slow] tear down framework | framework.go:193 STEP: Destroying namespace "autoscaling-6767" for this suite. 12/01/22 05:57:12.229
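In the entry above, `AddOrUpdateLabelOnNode` (and the cleanup's `RemoveLabelOffNode`) fail with `nodes "ca-minion-group-1-hxzx" not found`: the MIG was resized to 1, but the new instance had not yet registered as a Node object when the test tried to annotate it. A minimal sketch of guarding the label call against that race is shown below; it is not the framework's code, and the label value "true", the poll interval, and the 5-minute timeout are assumptions for illustration (only the label key `cluster-autoscaling-test.special-node` comes from the log):

```go
// Sketch: wait for a freshly created MIG node to register with the API server
// before patching a label onto it, instead of labeling immediately.
package nodelabels

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// LabelNodeWhenRegistered polls until the node object exists, then applies
// the label with a JSON merge patch.
func LabelNodeWhenRegistered(cs kubernetes.Interface, nodeName, key, value string) error {
	err := wait.PollImmediate(5*time.Second, 5*time.Minute, func() (bool, error) {
		_, err := cs.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return false, nil // not registered yet, keep polling
		}
		if err != nil {
			return false, err // unexpected error, give up
		}
		return true, nil
	})
	if err != nil {
		return fmt.Errorf("node %s never registered: %w", nodeName, err)
	}

	patch := fmt.Sprintf(`{"metadata":{"labels":{%q:%q}}}`, key, value)
	_, err = cs.CoreV1().Nodes().Patch(context.TODO(), nodeName,
		types.MergePatchType, []byte(patch), metav1.PatchOptions{})
	return err
}
```

Usage in this scenario would be, e.g., `LabelNodeWhenRegistered(cs, "ca-minion-group-1-hxzx", "cluster-autoscaling-test.special-node", "true")`, so the label is only applied once the kubelet has registered the node.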
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-autoscaling\]\sCluster\ssize\sautoscaling\s\[Slow\]\sshould\sbe\sable\sto\sscale\sdown\sby\sdraining\ssystem\spods\swith\spdb\[Feature\:ClusterSizeAutoscalingScaleDown\]$'
test/e2e/autoscaling/cluster_size_autoscaling.go:748 k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27.1(0x6) test/e2e/autoscaling/cluster_size_autoscaling.go:748 +0x94 k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc000d12b40, 0x7fa3ee0?, {0x75ce977, 0xb}, 0x2, 0x1, 0xc003923f58) test/e2e/autoscaling/cluster_size_autoscaling.go:1061 +0x842 k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27() test/e2e/autoscaling/cluster_size_autoscaling.go:746 +0x57from junit_01.xml
[BeforeEach] [sig-autoscaling] Cluster size autoscaling [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 12/01/22 06:20:05.736 Dec 1 06:20:05.736: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename autoscaling 12/01/22 06:20:05.738 STEP: Waiting for a default service account to be provisioned in namespace 12/01/22 06:20:05.866 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 12/01/22 06:20:05.948 [BeforeEach] [sig-autoscaling] Cluster size autoscaling [Slow] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-autoscaling] Cluster size autoscaling [Slow] test/e2e/autoscaling/cluster_size_autoscaling.go:103 STEP: Initial size of ca-minion-group-1: 0 12/01/22 06:20:09.635 STEP: Initial size of ca-minion-group: 2 12/01/22 06:20:13.212 Dec 1 06:20:13.258: INFO: Cluster has reached the desired number of ready nodes 2 STEP: Initial number of schedulable nodes: 2 12/01/22 06:20:13.301 [It] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown] test/e2e/autoscaling/cluster_size_autoscaling.go:745 STEP: Manually increase cluster size 12/01/22 06:20:13.301 STEP: Setting size of ca-minion-group to 4 12/01/22 06:20:16.814 Dec 1 06:20:16.814: INFO: Skipping dumping logs from cluster Dec 1 06:20:21.544: INFO: Skipping dumping logs from cluster STEP: Setting size of ca-minion-group-1 to 2 12/01/22 06:20:25.022 Dec 1 06:20:25.022: INFO: Skipping dumping logs from cluster Dec 1 06:20:29.825: INFO: Skipping dumping logs from cluster STEP: Setting size of ca-minion-group-1 to 2 12/01/22 06:20:33.392 Dec 1 06:20:33.393: INFO: Skipping dumping logs from cluster Dec 1 06:20:37.987: INFO: Skipping dumping logs from cluster W1201 06:20:41.506783 7918 cluster_size_autoscaling.go:1758] Unexpected node group size while waiting for cluster resize. Setting size to target again. I1201 06:20:41.506829 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 2, not ready nodes 0 Dec 1 06:21:01.555: INFO: Condition Ready of node ca-minion-group-1-flzm is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-12-01 06:20:31 +0000 UTC} {DeletionCandidateOfClusterAutoscaler 1669875632 PreferNoSchedule <nil>} {node.kubernetes.io/unreachable NoExecute 2022-12-01 06:20:36 +0000 UTC}]. Failure I1201 06:21:01.555891 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 2, not ready nodes 1 Dec 1 06:21:21.605: INFO: Condition Ready of node ca-minion-group-1-flzm is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-12-01 06:20:31 +0000 UTC} {DeletionCandidateOfClusterAutoscaler 1669875632 PreferNoSchedule <nil>} {node.kubernetes.io/unreachable NoExecute 2022-12-01 06:20:36 +0000 UTC}]. Failure I1201 06:21:21.605370 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 3, not ready nodes 1 Dec 1 06:21:41.653: INFO: Condition Ready of node ca-minion-group-1-flzm is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-12-01 06:20:31 +0000 UTC} {DeletionCandidateOfClusterAutoscaler 1669875632 PreferNoSchedule <nil>} {node.kubernetes.io/unreachable NoExecute 2022-12-01 06:20:36 +0000 UTC}]. 
Failure Dec 1 06:21:41.653: INFO: Condition Ready of node ca-minion-group-1-rjz0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule <nil>} {node.kubernetes.io/network-unavailable NoSchedule 2022-12-01 06:21:40 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2022-12-01 06:21:41 +0000 UTC}]. Failure I1201 06:21:41.653590 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 7, not ready nodes 2 I1201 06:22:01.699280 7918 cluster_size_autoscaling.go:1381] Cluster has reached the desired size STEP: Run a pod on each node 12/01/22 06:22:01.745 STEP: Taint node ca-minion-group-1-qwc8 12/01/22 06:22:01.746 STEP: Taint node ca-minion-group-1-rjz0 12/01/22 06:22:01.845 STEP: Taint node ca-minion-group-9089 12/01/22 06:22:01.951 STEP: Taint node ca-minion-group-h5sk 12/01/22 06:22:02.048 STEP: Taint node ca-minion-group-r6jd 12/01/22 06:22:02.14 STEP: Taint node ca-minion-group-vlq2 12/01/22 06:22:02.239 STEP: creating replication controller reschedulable-pods in namespace kube-system 12/01/22 06:22:02.336 I1201 06:22:02.381984 7918 runners.go:193] Created replication controller with name: reschedulable-pods, namespace: kube-system, replica count: 0 STEP: Remove taint from node ca-minion-group-1-qwc8 12/01/22 06:22:02.473 STEP: Taint node ca-minion-group-1-qwc8 12/01/22 06:22:07.705 STEP: Remove taint from node ca-minion-group-1-rjz0 12/01/22 06:22:07.798 STEP: Taint node ca-minion-group-1-rjz0 12/01/22 06:22:13.032 STEP: Remove taint from node ca-minion-group-9089 12/01/22 06:22:13.127 STEP: Taint node ca-minion-group-9089 12/01/22 06:22:18.367 STEP: Remove taint from node ca-minion-group-h5sk 12/01/22 06:22:18.461 STEP: Taint node ca-minion-group-h5sk 12/01/22 06:22:23.703 STEP: Remove taint from node ca-minion-group-r6jd 12/01/22 06:22:23.806 STEP: Taint node ca-minion-group-r6jd 12/01/22 06:22:29.068 STEP: Remove taint from node ca-minion-group-vlq2 12/01/22 06:22:29.163 STEP: Taint node ca-minion-group-vlq2 12/01/22 06:22:34.41 STEP: Remove taint from node ca-minion-group-vlq2 12/01/22 06:22:34.505 STEP: Remove taint from node ca-minion-group-r6jd 12/01/22 06:22:34.603 STEP: Remove taint from node ca-minion-group-h5sk 12/01/22 06:22:34.699 STEP: Remove taint from node ca-minion-group-9089 12/01/22 06:22:34.795 STEP: Remove taint from node ca-minion-group-1-rjz0 12/01/22 06:22:34.892 STEP: Remove taint from node ca-minion-group-1-qwc8 12/01/22 06:22:34.993 STEP: Create a PodDisruptionBudget 12/01/22 06:22:35.089 STEP: Some node should be removed 12/01/22 06:22:35.135 I1201 06:22:35.182202 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 I1201 06:22:55.233451 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 I1201 06:23:15.283310 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 I1201 06:23:35.333592 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 I1201 06:23:55.379744 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 I1201 06:24:15.438175 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 I1201 06:24:35.484198 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 I1201 06:24:55.531851 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with 
func, current size 6, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 5m7.566s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 In [It] (Node Runtime: 5m0s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 At [By Step] Some node should be removed (Step Runtime: 2m38.167s) test/e2e/autoscaling/cluster_size_autoscaling.go:747 Spec Goroutine goroutine 5858 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc0034724e0}, 0xc003765ba0, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27.1(0x6) test/e2e/autoscaling/cluster_size_autoscaling.go:748 > k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc000d12b40, 0x7fa3ee0?, {0x75ce977, 0xb}, 0x2, 0x1, 0xc003923f58) test/e2e/autoscaling/cluster_size_autoscaling.go:1061 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27() test/e2e/autoscaling/cluster_size_autoscaling.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003a54000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 06:25:15.576683 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 5m27.567s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 In [It] (Node Runtime: 5m20.002s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 At [By Step] Some node should be removed (Step Runtime: 2m58.169s) test/e2e/autoscaling/cluster_size_autoscaling.go:747 Spec Goroutine goroutine 5858 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc0034724e0}, 0xc003765ba0, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27.1(0x6) test/e2e/autoscaling/cluster_size_autoscaling.go:748 > k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc000d12b40, 0x7fa3ee0?, {0x75ce977, 0xb}, 0x2, 0x1, 0xc003923f58) test/e2e/autoscaling/cluster_size_autoscaling.go:1061 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27() test/e2e/autoscaling/cluster_size_autoscaling.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003a54000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 06:25:35.624700 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 5m47.57s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 In [It] (Node Runtime: 5m40.004s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 At [By Step] Some node should be removed (Step Runtime: 3m18.171s) test/e2e/autoscaling/cluster_size_autoscaling.go:747 Spec Goroutine goroutine 5858 [sleep, 2 minutes] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc0034724e0}, 0xc003765ba0, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27.1(0x6) test/e2e/autoscaling/cluster_size_autoscaling.go:748 > k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc000d12b40, 0x7fa3ee0?, {0x75ce977, 0xb}, 0x2, 0x1, 0xc003923f58) test/e2e/autoscaling/cluster_size_autoscaling.go:1061 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27() test/e2e/autoscaling/cluster_size_autoscaling.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003a54000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 06:25:55.673796 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 6m7.571s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 In [It] (Node Runtime: 6m0.006s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 At [By Step] Some node should be removed (Step Runtime: 3m38.172s) test/e2e/autoscaling/cluster_size_autoscaling.go:747 Spec Goroutine goroutine 5858 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc0034724e0}, 0xc003765ba0, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27.1(0x6) test/e2e/autoscaling/cluster_size_autoscaling.go:748 > k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc000d12b40, 0x7fa3ee0?, {0x75ce977, 0xb}, 0x2, 0x1, 0xc003923f58) test/e2e/autoscaling/cluster_size_autoscaling.go:1061 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27() test/e2e/autoscaling/cluster_size_autoscaling.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003a54000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 06:26:15.723106 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 6m27.573s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 In [It] (Node Runtime: 6m20.008s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 At [By Step] Some node should be removed (Step Runtime: 3m58.175s) test/e2e/autoscaling/cluster_size_autoscaling.go:747 Spec Goroutine goroutine 5858 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc0034724e0}, 0xc003765ba0, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27.1(0x6) test/e2e/autoscaling/cluster_size_autoscaling.go:748 > k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc000d12b40, 0x7fa3ee0?, {0x75ce977, 0xb}, 0x2, 0x1, 0xc003923f58) test/e2e/autoscaling/cluster_size_autoscaling.go:1061 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27() test/e2e/autoscaling/cluster_size_autoscaling.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003a54000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 06:26:35.775092 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 6m47.576s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 In [It] (Node Runtime: 6m40.01s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 At [By Step] Some node should be removed (Step Runtime: 4m18.177s) test/e2e/autoscaling/cluster_size_autoscaling.go:747 Spec Goroutine goroutine 5858 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc0034724e0}, 0xc003765ba0, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27.1(0x6) test/e2e/autoscaling/cluster_size_autoscaling.go:748 > k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc000d12b40, 0x7fa3ee0?, {0x75ce977, 0xb}, 0x2, 0x1, 0xc003923f58) test/e2e/autoscaling/cluster_size_autoscaling.go:1061 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27() test/e2e/autoscaling/cluster_size_autoscaling.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003a54000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 06:26:55.822044 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 7m7.578s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 In [It] (Node Runtime: 7m0.012s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 At [By Step] Some node should be removed (Step Runtime: 4m38.179s) test/e2e/autoscaling/cluster_size_autoscaling.go:747 Spec Goroutine goroutine 5858 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc0034724e0}, 0xc003765ba0, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27.1(0x6) test/e2e/autoscaling/cluster_size_autoscaling.go:748 > k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc000d12b40, 0x7fa3ee0?, {0x75ce977, 0xb}, 0x2, 0x1, 0xc003923f58) test/e2e/autoscaling/cluster_size_autoscaling.go:1061 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27() test/e2e/autoscaling/cluster_size_autoscaling.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003a54000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 06:27:15.871523 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 7m27.579s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 In [It] (Node Runtime: 7m20.013s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 At [By Step] Some node should be removed (Step Runtime: 4m58.18s) test/e2e/autoscaling/cluster_size_autoscaling.go:747 Spec Goroutine goroutine 5858 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc0034724e0}, 0xc003765ba0, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27.1(0x6) test/e2e/autoscaling/cluster_size_autoscaling.go:748 > k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc000d12b40, 0x7fa3ee0?, {0x75ce977, 0xb}, 0x2, 0x1, 0xc003923f58) test/e2e/autoscaling/cluster_size_autoscaling.go:1061 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27() test/e2e/autoscaling/cluster_size_autoscaling.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003a54000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 06:27:35.919310 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 7m47.58s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 In [It] (Node Runtime: 7m40.015s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 At [By Step] Some node should be removed (Step Runtime: 5m18.182s) test/e2e/autoscaling/cluster_size_autoscaling.go:747 Spec Goroutine goroutine 5858 [sleep, 2 minutes] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc0034724e0}, 0xc003765ba0, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27.1(0x6) test/e2e/autoscaling/cluster_size_autoscaling.go:748 > k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc000d12b40, 0x7fa3ee0?, {0x75ce977, 0xb}, 0x2, 0x1, 0xc003923f58) test/e2e/autoscaling/cluster_size_autoscaling.go:1061 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27() test/e2e/autoscaling/cluster_size_autoscaling.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003a54000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 06:27:55.967686 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 8m7.582s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 In [It] (Node Runtime: 8m0.017s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 At [By Step] Some node should be removed (Step Runtime: 5m38.184s) test/e2e/autoscaling/cluster_size_autoscaling.go:747 Spec Goroutine goroutine 5858 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc0034724e0}, 0xc003765ba0, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27.1(0x6) test/e2e/autoscaling/cluster_size_autoscaling.go:748 > k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc000d12b40, 0x7fa3ee0?, {0x75ce977, 0xb}, 0x2, 0x1, 0xc003923f58) test/e2e/autoscaling/cluster_size_autoscaling.go:1061 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27() test/e2e/autoscaling/cluster_size_autoscaling.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003a54000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 06:28:16.042252 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 8m27.585s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 In [It] (Node Runtime: 8m20.02s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 At [By Step] Some node should be removed (Step Runtime: 5m58.186s) test/e2e/autoscaling/cluster_size_autoscaling.go:747 Spec Goroutine goroutine 5858 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc0034724e0}, 0xc003765ba0, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27.1(0x6) test/e2e/autoscaling/cluster_size_autoscaling.go:748 > k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc000d12b40, 0x7fa3ee0?, {0x75ce977, 0xb}, 0x2, 0x1, 0xc003923f58) test/e2e/autoscaling/cluster_size_autoscaling.go:1061 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27() test/e2e/autoscaling/cluster_size_autoscaling.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003a54000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 06:28:36.091351 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 8m47.586s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 In [It] (Node Runtime: 8m40.021s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 At [By Step] Some node should be removed (Step Runtime: 6m18.187s) test/e2e/autoscaling/cluster_size_autoscaling.go:747 Spec Goroutine goroutine 5858 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc0034724e0}, 0xc003765ba0, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27.1(0x6) test/e2e/autoscaling/cluster_size_autoscaling.go:748 > k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc000d12b40, 0x7fa3ee0?, {0x75ce977, 0xb}, 0x2, 0x1, 0xc003923f58) test/e2e/autoscaling/cluster_size_autoscaling.go:1061 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27() test/e2e/autoscaling/cluster_size_autoscaling.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003a54000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 06:28:56.137727 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 9m7.587s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 In [It] (Node Runtime: 9m0.022s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 At [By Step] Some node should be removed (Step Runtime: 6m38.188s) test/e2e/autoscaling/cluster_size_autoscaling.go:747 Spec Goroutine goroutine 5858 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc0034724e0}, 0xc003765ba0, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27.1(0x6) test/e2e/autoscaling/cluster_size_autoscaling.go:748 > k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc000d12b40, 0x7fa3ee0?, {0x75ce977, 0xb}, 0x2, 0x1, 0xc003923f58) test/e2e/autoscaling/cluster_size_autoscaling.go:1061 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27() test/e2e/autoscaling/cluster_size_autoscaling.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003a54000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 06:29:16.186259 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 9m27.588s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 In [It] (Node Runtime: 9m20.023s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 At [By Step] Some node should be removed (Step Runtime: 6m58.189s) test/e2e/autoscaling/cluster_size_autoscaling.go:747 Spec Goroutine goroutine 5858 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc0034724e0}, 0xc003765ba0, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27.1(0x6) test/e2e/autoscaling/cluster_size_autoscaling.go:748 > k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc000d12b40, 0x7fa3ee0?, {0x75ce977, 0xb}, 0x2, 0x1, 0xc003923f58) test/e2e/autoscaling/cluster_size_autoscaling.go:1061 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27() test/e2e/autoscaling/cluster_size_autoscaling.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003a54000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 06:29:36.238029 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 9m47.59s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 In [It] (Node Runtime: 9m40.024s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 At [By Step] Some node should be removed (Step Runtime: 7m18.191s) test/e2e/autoscaling/cluster_size_autoscaling.go:747 Spec Goroutine goroutine 5858 [sleep, 2 minutes] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc0034724e0}, 0xc003765ba0, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27.1(0x6) test/e2e/autoscaling/cluster_size_autoscaling.go:748 > k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc000d12b40, 0x7fa3ee0?, {0x75ce977, 0xb}, 0x2, 0x1, 0xc003923f58) test/e2e/autoscaling/cluster_size_autoscaling.go:1061 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27() test/e2e/autoscaling/cluster_size_autoscaling.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003a54000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 06:29:56.286964 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 10m7.591s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 In [It] (Node Runtime: 10m0.025s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 At [By Step] Some node should be removed (Step Runtime: 7m38.192s) test/e2e/autoscaling/cluster_size_autoscaling.go:747 Spec Goroutine goroutine 5858 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc0034724e0}, 0xc003765ba0, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27.1(0x6) test/e2e/autoscaling/cluster_size_autoscaling.go:748 > k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc000d12b40, 0x7fa3ee0?, {0x75ce977, 0xb}, 0x2, 0x1, 0xc003923f58) test/e2e/autoscaling/cluster_size_autoscaling.go:1061 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27() test/e2e/autoscaling/cluster_size_autoscaling.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003a54000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 06:30:16.336571 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 10m27.592s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 In [It] (Node Runtime: 10m20.026s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 At [By Step] Some node should be removed (Step Runtime: 7m58.193s) test/e2e/autoscaling/cluster_size_autoscaling.go:747 Spec Goroutine goroutine 5858 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc0034724e0}, 0xc003765ba0, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27.1(0x6) test/e2e/autoscaling/cluster_size_autoscaling.go:748 > k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc000d12b40, 0x7fa3ee0?, {0x75ce977, 0xb}, 0x2, 0x1, 0xc003923f58) test/e2e/autoscaling/cluster_size_autoscaling.go:1061 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27() test/e2e/autoscaling/cluster_size_autoscaling.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003a54000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 06:30:36.382824 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 10m47.593s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 In [It] (Node Runtime: 10m40.027s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 At [By Step] Some node should be removed (Step Runtime: 8m18.194s) test/e2e/autoscaling/cluster_size_autoscaling.go:747 Spec Goroutine goroutine 5858 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc0034724e0}, 0xc003765ba0, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27.1(0x6) test/e2e/autoscaling/cluster_size_autoscaling.go:748 > k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc000d12b40, 0x7fa3ee0?, {0x75ce977, 0xb}, 0x2, 0x1, 0xc003923f58) test/e2e/autoscaling/cluster_size_autoscaling.go:1061 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27() test/e2e/autoscaling/cluster_size_autoscaling.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003a54000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 06:30:56.428844 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 11m7.595s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 In [It] (Node Runtime: 11m0.03s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 At [By Step] Some node should be removed (Step Runtime: 8m38.196s) test/e2e/autoscaling/cluster_size_autoscaling.go:747 Spec Goroutine goroutine 5858 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc0034724e0}, 0xc003765ba0, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27.1(0x6) test/e2e/autoscaling/cluster_size_autoscaling.go:748 > k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc000d12b40, 0x7fa3ee0?, {0x75ce977, 0xb}, 0x2, 0x1, 0xc003923f58) test/e2e/autoscaling/cluster_size_autoscaling.go:1061 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27() test/e2e/autoscaling/cluster_size_autoscaling.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003a54000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 06:31:16.478327 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 11m27.596s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 In [It] (Node Runtime: 11m20.03s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 At [By Step] Some node should be removed (Step Runtime: 8m58.197s) test/e2e/autoscaling/cluster_size_autoscaling.go:747 Spec Goroutine goroutine 5858 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc0034724e0}, 0xc003765ba0, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27.1(0x6) test/e2e/autoscaling/cluster_size_autoscaling.go:748 > k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc000d12b40, 0x7fa3ee0?, {0x75ce977, 0xb}, 0x2, 0x1, 0xc003923f58) test/e2e/autoscaling/cluster_size_autoscaling.go:1061 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27() test/e2e/autoscaling/cluster_size_autoscaling.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003a54000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 06:31:36.526294 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 11m47.597s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 In [It] (Node Runtime: 11m40.032s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 At [By Step] Some node should be removed (Step Runtime: 9m18.198s) test/e2e/autoscaling/cluster_size_autoscaling.go:747 Spec Goroutine goroutine 5858 [sleep, 2 minutes] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc0034724e0}, 0xc003765ba0, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27.1(0x6) test/e2e/autoscaling/cluster_size_autoscaling.go:748 > k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc000d12b40, 0x7fa3ee0?, {0x75ce977, 0xb}, 0x2, 0x1, 0xc003923f58) test/e2e/autoscaling/cluster_size_autoscaling.go:1061 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27() test/e2e/autoscaling/cluster_size_autoscaling.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003a54000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 06:31:56.576255 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 12m7.598s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 In [It] (Node Runtime: 12m0.033s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 At [By Step] Some node should be removed (Step Runtime: 9m38.2s) test/e2e/autoscaling/cluster_size_autoscaling.go:747 Spec Goroutine goroutine 5858 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc0034724e0}, 0xc003765ba0, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27.1(0x6) test/e2e/autoscaling/cluster_size_autoscaling.go:748 > k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc000d12b40, 0x7fa3ee0?, {0x75ce977, 0xb}, 0x2, 0x1, 0xc003923f58) test/e2e/autoscaling/cluster_size_autoscaling.go:1061 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27() test/e2e/autoscaling/cluster_size_autoscaling.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003a54000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 06:32:16.625031 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 12m27.6s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 In [It] (Node Runtime: 12m20.035s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 At [By Step] Some node should be removed (Step Runtime: 9m58.201s) test/e2e/autoscaling/cluster_size_autoscaling.go:747 Spec Goroutine goroutine 5858 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc0034724e0}, 0xc003765ba0, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27.1(0x6) test/e2e/autoscaling/cluster_size_autoscaling.go:748 > k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc000d12b40, 0x7fa3ee0?, {0x75ce977, 0xb}, 0x2, 0x1, 0xc003923f58) test/e2e/autoscaling/cluster_size_autoscaling.go:1061 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27() test/e2e/autoscaling/cluster_size_autoscaling.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003a54000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 06:32:36.671225 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 12m47.601s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 In [It] (Node Runtime: 12m40.036s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 At [By Step] Some node should be removed (Step Runtime: 10m18.202s) test/e2e/autoscaling/cluster_size_autoscaling.go:747 Spec Goroutine goroutine 5858 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc0034724e0}, 0xc003765ba0, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27.1(0x6) test/e2e/autoscaling/cluster_size_autoscaling.go:748 > k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc000d12b40, 0x7fa3ee0?, {0x75ce977, 0xb}, 0x2, 0x1, 0xc003923f58) test/e2e/autoscaling/cluster_size_autoscaling.go:1061 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27() test/e2e/autoscaling/cluster_size_autoscaling.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003a54000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 06:32:56.721553 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 13m7.603s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 In [It] (Node Runtime: 13m0.038s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 At [By Step] Some node should be removed (Step Runtime: 10m38.204s) test/e2e/autoscaling/cluster_size_autoscaling.go:747 Spec Goroutine goroutine 5858 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc0034724e0}, 0xc003765ba0, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27.1(0x6) test/e2e/autoscaling/cluster_size_autoscaling.go:748 > k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc000d12b40, 0x7fa3ee0?, {0x75ce977, 0xb}, 0x2, 0x1, 0xc003923f58) test/e2e/autoscaling/cluster_size_autoscaling.go:1061 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27() test/e2e/autoscaling/cluster_size_autoscaling.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003a54000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 06:33:16.770211 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 13m27.604s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 In [It] (Node Runtime: 13m20.039s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 At [By Step] Some node should be removed (Step Runtime: 10m58.205s) test/e2e/autoscaling/cluster_size_autoscaling.go:747 Spec Goroutine goroutine 5858 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc0034724e0}, 0xc003765ba0, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27.1(0x6) test/e2e/autoscaling/cluster_size_autoscaling.go:748 > k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc000d12b40, 0x7fa3ee0?, {0x75ce977, 0xb}, 0x2, 0x1, 0xc003923f58) test/e2e/autoscaling/cluster_size_autoscaling.go:1061 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27() test/e2e/autoscaling/cluster_size_autoscaling.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003a54000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 06:33:36.818183 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 13m47.605s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 In [It] (Node Runtime: 13m40.04s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 At [By Step] Some node should be removed (Step Runtime: 11m18.206s) test/e2e/autoscaling/cluster_size_autoscaling.go:747 Spec Goroutine goroutine 5858 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc0034724e0}, 0xc003765ba0, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27.1(0x6) test/e2e/autoscaling/cluster_size_autoscaling.go:748 > k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc000d12b40, 0x7fa3ee0?, {0x75ce977, 0xb}, 0x2, 0x1, 0xc003923f58) test/e2e/autoscaling/cluster_size_autoscaling.go:1061 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27() test/e2e/autoscaling/cluster_size_autoscaling.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003a54000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 06:33:56.865687 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 14m7.607s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 In [It] (Node Runtime: 14m0.042s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 At [By Step] Some node should be removed (Step Runtime: 11m38.208s) test/e2e/autoscaling/cluster_size_autoscaling.go:747 Spec Goroutine goroutine 5858 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc0034724e0}, 0xc003765ba0, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27.1(0x6) test/e2e/autoscaling/cluster_size_autoscaling.go:748 > k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc000d12b40, 0x7fa3ee0?, {0x75ce977, 0xb}, 0x2, 0x1, 0xc003923f58) test/e2e/autoscaling/cluster_size_autoscaling.go:1061 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27() test/e2e/autoscaling/cluster_size_autoscaling.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003a54000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 06:34:16.913538 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 14m27.61s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 In [It] (Node Runtime: 14m20.044s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 At [By Step] Some node should be removed (Step Runtime: 11m58.211s) test/e2e/autoscaling/cluster_size_autoscaling.go:747 Spec Goroutine goroutine 5858 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc0034724e0}, 0xc003765ba0, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27.1(0x6) test/e2e/autoscaling/cluster_size_autoscaling.go:748 > k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc000d12b40, 0x7fa3ee0?, {0x75ce977, 0xb}, 0x2, 0x1, 0xc003923f58) test/e2e/autoscaling/cluster_size_autoscaling.go:1061 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27() test/e2e/autoscaling/cluster_size_autoscaling.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003a54000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 06:34:36.961024 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 14m47.611s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 In [It] (Node Runtime: 14m40.046s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 At [By Step] Some node should be removed (Step Runtime: 12m18.212s) test/e2e/autoscaling/cluster_size_autoscaling.go:747 Spec Goroutine goroutine 5858 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc0034724e0}, 0xc003765ba0, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27.1(0x6) test/e2e/autoscaling/cluster_size_autoscaling.go:748 > k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc000d12b40, 0x7fa3ee0?, {0x75ce977, 0xb}, 0x2, 0x1, 0xc003923f58) test/e2e/autoscaling/cluster_size_autoscaling.go:1061 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27() test/e2e/autoscaling/cluster_size_autoscaling.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003a54000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 06:34:57.009050 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 15m7.613s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 In [It] (Node Runtime: 15m0.048s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 At [By Step] Some node should be removed (Step Runtime: 12m38.214s) test/e2e/autoscaling/cluster_size_autoscaling.go:747 Spec Goroutine goroutine 5858 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc0034724e0}, 0xc003765ba0, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27.1(0x6) test/e2e/autoscaling/cluster_size_autoscaling.go:748 > k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc000d12b40, 0x7fa3ee0?, {0x75ce977, 0xb}, 0x2, 0x1, 0xc003923f58) test/e2e/autoscaling/cluster_size_autoscaling.go:1061 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27() test/e2e/autoscaling/cluster_size_autoscaling.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003a54000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 06:35:17.055836 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 15m27.615s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 In [It] (Node Runtime: 15m20.049s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 At [By Step] Some node should be removed (Step Runtime: 12m58.216s) test/e2e/autoscaling/cluster_size_autoscaling.go:747 Spec Goroutine goroutine 5858 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc0034724e0}, 0xc003765ba0, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27.1(0x6) test/e2e/autoscaling/cluster_size_autoscaling.go:748 > k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc000d12b40, 0x7fa3ee0?, {0x75ce977, 0xb}, 0x2, 0x1, 0xc003923f58) test/e2e/autoscaling/cluster_size_autoscaling.go:1061 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27() test/e2e/autoscaling/cluster_size_autoscaling.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003a54000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 06:35:37.102772 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 15m47.615s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 In [It] (Node Runtime: 15m40.05s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 At [By Step] Some node should be removed (Step Runtime: 13m18.217s) test/e2e/autoscaling/cluster_size_autoscaling.go:747 Spec Goroutine goroutine 5858 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc0034724e0}, 0xc003765ba0, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
[... the same automatic progress report, with an identical Spec Goroutine stack, repeats every 20 seconds from I1201 06:35:57.149848 through I1201 06:42:18.093487, logging "Waiting for cluster with func, current size 6, not ready nodes 0" at every poll while the Spec Runtime grows from 16m7.617s to 22m27.647s ...]
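For readers triaging this failure: the goroutine above is parked in a 20-second sleep (0x4a817c800 ns) inside WaitForClusterSizeFuncWithUnready, which was called with a timeout of 0x1176592e000 ns, i.e. the 20 minutes reported in the failure below, while the schedulable-node count stays at 6 instead of dropping after the expected scale-down. The snippet below is only a rough, hypothetical sketch of that kind of wait loop, written with client-go for illustration; the helper name waitForClusterSize, the predicate, and the readiness check are assumptions, not the e2e framework's actual code.

package sketch

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForClusterSize is a hypothetical stand-in for the helper visible in the
// stack trace above: poll the node list, count nodes that are Ready and
// schedulable, log the count, and retry every 20s until sizeFunc accepts it or
// the timeout (20m in this run) expires.
func waitForClusterSize(ctx context.Context, c kubernetes.Interface, sizeFunc func(int) bool, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		nodes, err := c.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
		if err == nil {
			ready, notReady := 0, 0
			for i := range nodes.Items {
				n := &nodes.Items[i]
				switch {
				case !isNodeReady(n):
					notReady++
				case !n.Spec.Unschedulable:
					ready++
				}
			}
			fmt.Printf("Waiting for cluster with func, current size %d, not ready nodes %d\n", ready, notReady)
			if sizeFunc(ready) {
				return nil
			}
		}
		time.Sleep(20 * time.Second)
	}
	return fmt.Errorf("timeout waiting %v for appropriate cluster size", timeout)
}

// isNodeReady reports whether the node's Ready condition is True.
func isNodeReady(n *v1.Node) bool {
	for _, cond := range n.Status.Conditions {
		if cond.Type == v1.NodeReady {
			return cond.Status == v1.ConditionTrue
		}
	}
	return false
}

In this run the size predicate never became true, which is exactly what the repeated "current size 6" lines show; the timeout error that follows is the expected outcome of such a loop expiring.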
------------------------------
Dec 1 06:42:38.094: INFO: Unexpected error:
    <*errors.errorString | 0xc0007ba000>: {
        s: "timeout waiting 20m0s for appropriate cluster size",
    }
Dec 1 06:42:38.094: FAIL: timeout waiting 20m0s for appropriate cluster size

Full Stack Trace
k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27.1(0x6)
	test/e2e/autoscaling/cluster_size_autoscaling.go:748 +0x94
k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc000d12b40, 0x7fa3ee0?, {0x75ce977, 0xb}, 0x2, 0x1, 0xc003923f58)
	test/e2e/autoscaling/cluster_size_autoscaling.go:1061 +0x842
k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27()
	test/e2e/autoscaling/cluster_size_autoscaling.go:746 +0x57
STEP: deleting ReplicationController reschedulable-pods in namespace kube-system, will wait for the garbage collector to delete the pods 12/01/22 06:42:38.14
Dec 1 06:42:38.278: INFO: Deleting ReplicationController reschedulable-pods took: 45.110776ms
Dec 1 06:42:38.479: INFO: Terminating ReplicationController reschedulable-pods pods took: 201.005634ms
[AfterEach] [sig-autoscaling] Cluster size autoscaling [Slow]
  test/e2e/framework/node/init/init.go:32
Dec 1 06:42:39.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-autoscaling] Cluster size autoscaling [Slow]
  test/e2e/autoscaling/cluster_size_autoscaling.go:139
STEP: Restoring initial size of the cluster 12/01/22 06:42:39.728
STEP: Setting size of ca-minion-group-1 to 0 12/01/22 06:42:43.615
Dec 1 06:42:43.616: INFO: Skipping dumping logs from cluster
Dec 1 06:42:48.161: INFO: Skipping dumping logs from cluster
STEP: Setting size of ca-minion-group to 2 12/01/22 06:42:51.702
Dec 1 06:42:51.702: INFO: Skipping dumping logs from cluster
Dec 1 06:42:56.260: INFO: Skipping dumping logs from cluster
Dec 1 06:42:56.307: INFO: Waiting for ready nodes 2, current ready 6, not ready nodes 0
Dec 1 06:43:16.358: INFO: Waiting for ready nodes 2, current ready 6, not ready nodes 0
Dec 1 06:43:36.409: INFO: Condition Ready of node ca-minion-group-1-qwc8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-12-01 06:43:26 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-12-01 06:43:31 +0000 UTC}]. Failure
Dec 1 06:43:36.409: INFO: Condition Ready of node ca-minion-group-1-rjz0 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
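Since the failed [It] ("should be able to scale down by draining system pods with pdb") relies on a PodDisruptionBudget making the kube-system pods on the drained node evictable, a natural first check when this times out is whether that PDB actually allowed any disruptions while the autoscaler was trying to remove a node. The snippet below is a small, hypothetical triage helper using client-go; it is illustration only and not part of the e2e framework, and the function name listSystemPDBs is invented here.

package sketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// listSystemPDBs prints, for each PodDisruptionBudget in kube-system, how many
// voluntary disruptions it currently allows. A PDB stuck at
// disruptionsAllowed=0 would block the cluster autoscaler from draining the
// node that hosts the covered pods, which would be one way to end up with the
// symptom above: the schedulable-node count never dropping below 6.
func listSystemPDBs(ctx context.Context, c kubernetes.Interface) error {
	pdbs, err := c.PolicyV1().PodDisruptionBudgets("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, pdb := range pdbs.Items {
		fmt.Printf("pdb %s: disruptionsAllowed=%d currentHealthy=%d desiredHealthy=%d\n",
			pdb.Name, pdb.Status.DisruptionsAllowed, pdb.Status.CurrentHealthy, pdb.Status.DesiredHealthy)
	}
	return nil
}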
Dec 1 06:43:36.409: INFO: Waiting for ready nodes 2, current ready 4, not ready nodes 2
Dec 1 06:43:56.462: INFO: Condition Ready of node ca-minion-group-1-qwc8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-12-01 06:43:26 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-12-01 06:43:31 +0000 UTC}]. Failure
Dec 1 06:43:56.462: INFO: Condition Ready of node ca-minion-group-1-rjz0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-12-01 06:43:31 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-12-01 06:43:42 +0000 UTC}]. Failure
Dec 1 06:43:56.462: INFO: Condition Ready of node ca-minion-group-9089 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Dec 1 06:43:56.462: INFO: Condition Ready of node ca-minion-group-h5sk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Dec 1 06:43:56.462: INFO: Waiting for ready nodes 2, current ready 2, not ready nodes 4
Dec 1 06:44:16.512: INFO: Condition Ready of node ca-minion-group-1-qwc8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-12-01 06:43:26 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-12-01 06:43:31 +0000 UTC}]. Failure
Dec 1 06:44:16.512: INFO: Condition Ready of node ca-minion-group-1-rjz0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-12-01 06:43:31 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-12-01 06:43:42 +0000 UTC}]. Failure
Dec 1 06:44:16.512: INFO: Condition Ready of node ca-minion-group-9089 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Dec 1 06:44:16.512: INFO: Condition Ready of node ca-minion-group-h5sk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Dec 1 06:44:16.512: INFO: Waiting for ready nodes 2, current ready 2, not ready nodes 4
Dec 1 06:44:36.566: INFO: Condition Ready of node ca-minion-group-1-qwc8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-12-01 06:43:26 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-12-01 06:43:31 +0000 UTC}]. Failure
Dec 1 06:44:36.566: INFO: Condition Ready of node ca-minion-group-1-rjz0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-12-01 06:43:31 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-12-01 06:43:42 +0000 UTC}]. Failure
Dec 1 06:44:36.566: INFO: Condition Ready of node ca-minion-group-9089 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Dec 1 06:44:36.566: INFO: Condition Ready of node ca-minion-group-h5sk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Dec 1 06:44:36.566: INFO: Waiting for ready nodes 2, current ready 2, not ready nodes 4
Dec 1 06:44:56.617: INFO: Condition Ready of node ca-minion-group-1-qwc8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-12-01 06:43:26 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-12-01 06:43:31 +0000 UTC}].
Failure Dec 1 06:44:56.618: INFO: Condition Ready of node ca-minion-group-1-rjz0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-12-01 06:43:31 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-12-01 06:43:42 +0000 UTC}]. Failure Dec 1 06:44:56.618: INFO: Condition Ready of node ca-minion-group-9089 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Dec 1 06:44:56.618: INFO: Condition Ready of node ca-minion-group-h5sk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Dec 1 06:44:56.618: INFO: Waiting for ready nodes 2, current ready 2, not ready nodes 4 Dec 1 06:45:16.668: INFO: Cluster has reached the desired number of ready nodes 2 STEP: Remove taint from node ca-master 12/01/22 06:45:16.712 STEP: Remove taint from node ca-minion-group-r6jd 12/01/22 06:45:16.756 STEP: Remove taint from node ca-minion-group-vlq2 12/01/22 06:45:16.8 I1201 06:45:16.845142 7918 cluster_size_autoscaling.go:165] Made nodes schedulable again in 132.520519ms [DeferCleanup (Each)] [sig-autoscaling] Cluster size autoscaling [Slow] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-autoscaling] Cluster size autoscaling [Slow] dump namespaces | framework.go:196 STEP: dump namespace information after failure 12/01/22 06:45:16.845 STEP: Collecting events from namespace "autoscaling-400". 12/01/22 06:45:16.845 STEP: Found 0 events. 12/01/22 06:45:16.888 Dec 1 06:45:16.930: INFO: POD NODE PHASE GRACE CONDITIONS Dec 1 06:45:16.930: INFO: Dec 1 06:45:16.973: INFO: Logging node info for node ca-master Dec 1 06:45:17.014: INFO: Node Info: &Node{ObjectMeta:{ca-master a2126acf-72e0-4c73-a9ef-ce1238132582 25713 0 2022-12-01 04:35:35 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:ca-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-12-01 04:35:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-12-01 04:35:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-12-01 04:35:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 
2022-12-01 06:43:22 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-autoscaling-migs/us-west1-b/ca-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-12-01 04:35:50 +0000 UTC,LastTransitionTime:2022-12-01 04:35:50 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-12-01 06:43:22 +0000 UTC,LastTransitionTime:2022-12-01 04:35:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-12-01 06:43:22 +0000 UTC,LastTransitionTime:2022-12-01 04:35:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-12-01 06:43:22 +0000 UTC,LastTransitionTime:2022-12-01 04:35:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-12-01 06:43:22 +0000 UTC,LastTransitionTime:2022-12-01 04:35:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.105.118.216,},NodeAddress{Type:InternalDNS,Address:ca-master.c.k8s-jkns-gci-autoscaling-migs.internal,},NodeAddress{Type:Hostname,Address:ca-master.c.k8s-jkns-gci-autoscaling-migs.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bac070a384fefb9133ee7878e15673cf,SystemUUID:bac070a3-84fe-fb91-33ee-7878e15673cf,BootID:39b8786f-3724-43ea-9f9b-9333f7876ff8,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.6.6,KubeletVersion:v1.27.0-alpha.0.54+79cba170b55bd0,KubeProxyVersion:v1.27.0-alpha.0.54+79cba170b55bd0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.54_79cba170b55bd0],SizeBytes:135160275,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.54_79cba170b55bd0],SizeBytes:124990265,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.54_79cba170b55bd0],SizeBytes:57660216,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/autoscaling/cluster-autoscaler@sha256:07ab8c89cd0ad296ddb6347febe196d8fe0e1c757656a98f71199860d87cf1a5 registry.k8s.io/autoscaling/cluster-autoscaler:v1.22.0],SizeBytes:24220268,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Dec 1 06:45:17.015: INFO: Logging kubelet events for node ca-master Dec 1 06:45:17.061: INFO: Logging pods the kubelet thinks is on node ca-master Dec 1 06:45:17.122: INFO: konnectivity-server-ca-master started at 2022-12-01 04:34:44 +0000 UTC (0+1 container statuses recorded) Dec 1 06:45:17.122: INFO: Container konnectivity-server-container ready: true, restart count 0 Dec 1 06:45:17.122: INFO: kube-scheduler-ca-master started at 2022-12-01 04:34:44 +0000 UTC (0+1 container statuses recorded) Dec 1 06:45:17.122: INFO: Container kube-scheduler ready: true, restart count 0 Dec 1 06:45:17.122: INFO: metadata-proxy-v0.1-4rrgr started at 2022-12-01 04:35:35 +0000 UTC (0+2 container statuses 
recorded) Dec 1 06:45:17.122: INFO: Container metadata-proxy ready: true, restart count 0 Dec 1 06:45:17.122: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Dec 1 06:45:17.122: INFO: l7-lb-controller-ca-master started at 2022-12-01 04:35:03 +0000 UTC (0+1 container statuses recorded) Dec 1 06:45:17.122: INFO: Container l7-lb-controller ready: true, restart count 2 Dec 1 06:45:17.122: INFO: kube-apiserver-ca-master started at 2022-12-01 04:34:44 +0000 UTC (0+1 container statuses recorded) Dec 1 06:45:17.122: INFO: Container kube-apiserver ready: true, restart count 0 Dec 1 06:45:17.122: INFO: kube-controller-manager-ca-master started at 2022-12-01 04:34:44 +0000 UTC (0+1 container statuses recorded) Dec 1 06:45:17.122: INFO: Container kube-controller-manager ready: true, restart count 1 Dec 1 06:45:17.122: INFO: etcd-server-events-ca-master started at 2022-12-01 04:34:44 +0000 UTC (0+1 container statuses recorded) Dec 1 06:45:17.122: INFO: Container etcd-container ready: true, restart count 0 Dec 1 06:45:17.122: INFO: etcd-server-ca-master started at 2022-12-01 04:34:44 +0000 UTC (0+1 container statuses recorded) Dec 1 06:45:17.122: INFO: Container etcd-container ready: true, restart count 0 Dec 1 06:45:17.122: INFO: kube-addon-manager-ca-master started at 2022-12-01 04:35:03 +0000 UTC (0+1 container statuses recorded) Dec 1 06:45:17.122: INFO: Container kube-addon-manager ready: true, restart count 0 Dec 1 06:45:17.122: INFO: cluster-autoscaler-ca-master started at 2022-12-01 04:35:03 +0000 UTC (0+1 container statuses recorded) Dec 1 06:45:17.122: INFO: Container cluster-autoscaler ready: true, restart count 2 Dec 1 06:45:17.331: INFO: Latency metrics for node ca-master Dec 1 06:45:17.331: INFO: Logging node info for node ca-minion-group-r6jd Dec 1 06:45:17.373: INFO: Node Info: &Node{ObjectMeta:{ca-minion-group-r6jd ce3a6702-2630-4835-af73-9b154fcd1867 25608 0 2022-12-01 06:21:08 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:ca-minion-group-r6jd kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-12-01 06:21:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-12-01 06:21:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.22.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-12-01 06:21:20 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2022-12-01 06:41:17 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-12-01 06:42:04 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status} {cluster-autoscaler Update v1 2022-12-01 06:42:43 +0000 UTC FieldsV1 {"f:spec":{"f:taints":{}}} }]},Spec:NodeSpec{PodCIDR:10.64.22.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-autoscaling-migs/us-west1-b/ca-minion-group-r6jd,Unschedulable:false,Taints:[]Taint{Taint{Key:DeletionCandidateOfClusterAutoscaler,Value:1669876963,Effect:PreferNoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.22.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-12-01 06:41:17 +0000 UTC,LastTransitionTime:2022-12-01 06:21:13 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-12-01 06:41:17 +0000 UTC,LastTransitionTime:2022-12-01 06:21:13 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-12-01 06:41:17 +0000 UTC,LastTransitionTime:2022-12-01 
06:21:13 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-12-01 06:41:17 +0000 UTC,LastTransitionTime:2022-12-01 06:21:13 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-12-01 06:41:17 +0000 UTC,LastTransitionTime:2022-12-01 06:21:13 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-12-01 06:41:17 +0000 UTC,LastTransitionTime:2022-12-01 06:21:13 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-12-01 06:41:17 +0000 UTC,LastTransitionTime:2022-12-01 06:21:13 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-12-01 06:21:20 +0000 UTC,LastTransitionTime:2022-12-01 06:21:20 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-12-01 06:42:04 +0000 UTC,LastTransitionTime:2022-12-01 06:21:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-12-01 06:42:04 +0000 UTC,LastTransitionTime:2022-12-01 06:21:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-12-01 06:42:04 +0000 UTC,LastTransitionTime:2022-12-01 06:21:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-12-01 06:42:04 +0000 UTC,LastTransitionTime:2022-12-01 06:21:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.25,},NodeAddress{Type:ExternalIP,Address:34.168.253.178,},NodeAddress{Type:InternalDNS,Address:ca-minion-group-r6jd.c.k8s-jkns-gci-autoscaling-migs.internal,},NodeAddress{Type:Hostname,Address:ca-minion-group-r6jd.c.k8s-jkns-gci-autoscaling-migs.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:62ad827b13e1a1e1081beb1fca51b760,SystemUUID:62ad827b-13e1-a1e1-081b-eb1fca51b760,BootID:a28b8faf-30af-4c66-8463-0be07845e173,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.6.6,KubeletVersion:v1.27.0-alpha.0.54+79cba170b55bd0,KubeProxyVersion:v1.27.0-alpha.0.54+79cba170b55bd0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.54_79cba170b55bd0],SizeBytes:67201736,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Dec 1 06:45:17.373: INFO: Logging kubelet events for node ca-minion-group-r6jd Dec 1 06:45:17.419: INFO: Logging pods the kubelet thinks is on node ca-minion-group-r6jd Dec 1 06:45:17.482: INFO: kube-proxy-ca-minion-group-r6jd started at 2022-12-01 06:21:08 +0000 UTC (0+1 container statuses recorded) Dec 1 06:45:17.482: INFO: Container kube-proxy ready: true, restart count 0 Dec 1 06:45:17.482: INFO: metadata-proxy-v0.1-w6md5 started at 2022-12-01 06:21:09 +0000 UTC (0+2 container statuses recorded) Dec 1 06:45:17.482: INFO: Container metadata-proxy ready: true, restart count 0 Dec 1 06:45:17.482: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Dec 1 06:45:17.482: INFO: konnectivity-agent-zttnt started at 2022-12-01 06:21:20 +0000 UTC (0+1 container statuses recorded) Dec 1 06:45:17.482: INFO: Container konnectivity-agent ready: true, restart count 0 Dec 1 06:45:17.651: INFO: Latency metrics for node ca-minion-group-r6jd Dec 1 06:45:17.651: INFO: Logging node info for node ca-minion-group-vlq2 Dec 1 06:45:17.694: INFO: Node Info: &Node{ObjectMeta:{ca-minion-group-vlq2 132befc3-8b36-49c3-8aee-8af679afd99a 25255 0 2022-12-01 05:20:51 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:ca-minion-group-vlq2 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] 
[] [] [{kubelet Update v1 2022-12-01 05:20:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-12-01 05:21:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.14.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-12-01 05:21:02 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-12-01 06:40:58 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status} {node-problem-detector Update v1 2022-12-01 06:41:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.14.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-autoscaling-migs/us-west1-b/ca-minion-group-vlq2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.14.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 
DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-12-01 06:41:05 +0000 UTC,LastTransitionTime:2022-12-01 05:20:55 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-12-01 06:41:05 +0000 UTC,LastTransitionTime:2022-12-01 05:20:55 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-12-01 06:41:05 +0000 UTC,LastTransitionTime:2022-12-01 05:20:55 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-12-01 06:41:05 +0000 UTC,LastTransitionTime:2022-12-01 05:20:55 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-12-01 06:41:05 +0000 UTC,LastTransitionTime:2022-12-01 05:20:55 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-12-01 06:41:05 +0000 UTC,LastTransitionTime:2022-12-01 05:20:55 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-12-01 06:41:05 +0000 UTC,LastTransitionTime:2022-12-01 05:20:55 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-12-01 05:21:02 +0000 UTC,LastTransitionTime:2022-12-01 05:21:02 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-12-01 06:40:58 +0000 UTC,LastTransitionTime:2022-12-01 05:20:51 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-12-01 06:40:58 +0000 UTC,LastTransitionTime:2022-12-01 05:20:51 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-12-01 06:40:58 +0000 UTC,LastTransitionTime:2022-12-01 05:20:51 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-12-01 06:40:58 +0000 UTC,LastTransitionTime:2022-12-01 05:20:51 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.16,},NodeAddress{Type:ExternalIP,Address:35.227.188.214,},NodeAddress{Type:InternalDNS,Address:ca-minion-group-vlq2.c.k8s-jkns-gci-autoscaling-migs.internal,},NodeAddress{Type:Hostname,Address:ca-minion-group-vlq2.c.k8s-jkns-gci-autoscaling-migs.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:240e9092aae0ae79fa5461368e619ce5,SystemUUID:240e9092-aae0-ae79-fa54-61368e619ce5,BootID:02338211-5cf4-4ba4-bf8a-82e73c605696,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.6.6,KubeletVersion:v1.27.0-alpha.0.54+79cba170b55bd0,KubeProxyVersion:v1.27.0-alpha.0.54+79cba170b55bd0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.54_79cba170b55bd0],SizeBytes:67201736,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Dec 1 06:45:17.694: INFO: Logging kubelet events for node ca-minion-group-vlq2 Dec 1 06:45:17.740: INFO: Logging pods the kubelet thinks is on node ca-minion-group-vlq2 Dec 1 06:45:17.804: INFO: l7-default-backend-8549d69d99-n8nmc started at 2022-12-01 05:49:04 +0000 UTC (0+1 container statuses recorded) Dec 1 06:45:17.804: INFO: Container default-http-backend ready: true, restart count 0 Dec 1 06:45:17.804: INFO: metrics-server-v0.5.2-867b8754b9-pmk4k started at 2022-12-01 05:30:20 +0000 UTC (0+2 container statuses recorded) Dec 1 06:45:17.804: INFO: Container metrics-server ready: true, restart count 1 Dec 1 06:45:17.804: INFO: Container metrics-server-nanny 
ready: true, restart count 0 Dec 1 06:45:17.804: INFO: kube-proxy-ca-minion-group-vlq2 started at 2022-12-01 05:20:51 +0000 UTC (0+1 container statuses recorded) Dec 1 06:45:17.804: INFO: Container kube-proxy ready: true, restart count 0 Dec 1 06:45:17.804: INFO: coredns-6d97d5ddb-gpg9p started at 2022-12-01 05:49:04 +0000 UTC (0+1 container statuses recorded) Dec 1 06:45:17.804: INFO: Container coredns ready: true, restart count 0 Dec 1 06:45:17.804: INFO: konnectivity-agent-x9vdq started at 2022-12-01 05:21:02 +0000 UTC (0+1 container statuses recorded) Dec 1 06:45:17.804: INFO: Container konnectivity-agent ready: true, restart count 0 Dec 1 06:45:17.804: INFO: volume-snapshot-controller-0 started at 2022-12-01 05:49:04 +0000 UTC (0+1 container statuses recorded) Dec 1 06:45:17.804: INFO: Container volume-snapshot-controller ready: true, restart count 0 Dec 1 06:45:17.804: INFO: metadata-proxy-v0.1-mvw84 started at 2022-12-01 05:20:52 +0000 UTC (0+2 container statuses recorded) Dec 1 06:45:17.804: INFO: Container metadata-proxy ready: true, restart count 0 Dec 1 06:45:17.804: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Dec 1 06:45:17.979: INFO: Latency metrics for node ca-minion-group-vlq2 [DeferCleanup (Each)] [sig-autoscaling] Cluster size autoscaling [Slow] tear down framework | framework.go:193 STEP: Destroying namespace "autoscaling-400" for this suite. 12/01/22 06:45:17.979
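The Node Info dump above shows ca-minion-group-r6jd still carrying only a DeletionCandidateOfClusterAutoscaler (PreferNoSchedule) taint when the test gave up, i.e. the autoscaler had marked the node as a scale-down candidate but never progressed to deleting it. A minimal client-go sketch for inspecting that state outside the e2e framework; the kubeconfig path and taint name are taken from the log, everything else is a hypothetical standalone helper and not part of the test suite:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes the same kubeconfig path shown at the top of the test log.
	config, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// List nodes and print their taints; DeletionCandidateOfClusterAutoscaler
	// (and later ToBeDeletedByClusterAutoscaler) are the taints the cluster
	// autoscaler applies while considering or performing a scale-down.
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("%s\t%v\n", n.Name, n.Spec.Taints)
	}
}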
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-autoscaling\]\sCluster\ssize\sautoscaling\s\[Slow\]\sshould\sscale\sdown\swhen\sexpendable\spod\sis\srunning\s\[Feature\:ClusterSizeAutoscalingScaleDown\]$'
test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 +0x1bc
from junit_01.xml
[BeforeEach] [sig-autoscaling] Cluster size autoscaling [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 12/01/22 08:22:54.277 Dec 1 08:22:54.277: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename autoscaling 12/01/22 08:22:54.278 STEP: Waiting for a default service account to be provisioned in namespace 12/01/22 08:22:54.406 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 12/01/22 08:22:54.489 [BeforeEach] [sig-autoscaling] Cluster size autoscaling [Slow] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-autoscaling] Cluster size autoscaling [Slow] test/e2e/autoscaling/cluster_size_autoscaling.go:103 STEP: Initial size of ca-minion-group-1: 0 12/01/22 08:22:58.428 STEP: Initial size of ca-minion-group: 2 12/01/22 08:23:01.878 Dec 1 08:23:01.923: INFO: Cluster has reached the desired number of ready nodes 2 STEP: Initial number of schedulable nodes: 2 12/01/22 08:23:01.966 [It] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] test/e2e/autoscaling/cluster_size_autoscaling.go:985 STEP: Manually increase cluster size 12/01/22 08:23:02.056 STEP: Setting size of ca-minion-group to 4 12/01/22 08:23:05.532 Dec 1 08:23:05.532: INFO: Skipping dumping logs from cluster Dec 1 08:23:11.252: INFO: Skipping dumping logs from cluster STEP: Setting size of ca-minion-group-1 to 2 12/01/22 08:23:14.832 Dec 1 08:23:14.833: INFO: Skipping dumping logs from cluster Dec 1 08:23:20.057: INFO: Skipping dumping logs from cluster STEP: Setting size of ca-minion-group to 4 12/01/22 08:23:23.606 Dec 1 08:23:23.606: INFO: Skipping dumping logs from cluster Dec 1 08:23:28.767: INFO: Skipping dumping logs from cluster W1201 08:23:32.404133 7918 cluster_size_autoscaling.go:1758] Unexpected node group size while waiting for cluster resize. Setting size to target again. 
I1201 08:23:32.404184 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 2, not ready nodes 0 I1201 08:24:01.935736 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 2, not ready nodes 0 I1201 08:24:30.254460 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 2, not ready nodes 0 I1201 08:24:50.304778 7918 cluster_size_autoscaling.go:1381] Cluster has reached the desired size STEP: Running RC which reserves 30252 MB of memory 12/01/22 08:24:50.304 STEP: creating replication controller memory-reservation in namespace autoscaling-2417 12/01/22 08:24:50.304 I1201 08:24:50.354416 7918 runners.go:193] Created replication controller with name: memory-reservation, namespace: autoscaling-2417, replica count: 6 I1201 08:25:00.409281 7918 runners.go:193] memory-reservation Pods: 6 out of 6 created, 6 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: Waiting for scale down 12/01/22 08:25:00.409 I1201 08:25:00.457437 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 I1201 08:25:20.505893 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 I1201 08:25:40.554841 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 I1201 08:26:00.602959 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 I1201 08:26:20.693340 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 I1201 08:26:40.742092 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 I1201 08:27:00.790561 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 I1201 08:27:20.841444 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 I1201 08:27:40.890249 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 I1201 08:28:00.936646 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 5m7.69s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 In [It] (Node Runtime: 5m0.001s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 At [By Step] Waiting for scale down (Step Runtime: 3m1.558s) test/e2e/autoscaling/cluster_size_autoscaling.go:991 Spec Goroutine goroutine 11138 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002077ba0}, 0xc004577f38, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004078600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 08:28:20.987729 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 5m27.692s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 In [It] (Node Runtime: 5m20.002s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 At [By Step] Waiting for scale down (Step Runtime: 3m21.56s) test/e2e/autoscaling/cluster_size_autoscaling.go:991 Spec Goroutine goroutine 11138 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002077ba0}, 0xc004577f38, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004078600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 08:28:41.037279 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 5m47.693s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 In [It] (Node Runtime: 5m40.004s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 At [By Step] Waiting for scale down (Step Runtime: 3m41.561s) test/e2e/autoscaling/cluster_size_autoscaling.go:991 Spec Goroutine goroutine 11138 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002077ba0}, 0xc004577f38, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004078600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 08:29:01.084791 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 6m7.694s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 In [It] (Node Runtime: 6m0.005s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 At [By Step] Waiting for scale down (Step Runtime: 4m1.562s) test/e2e/autoscaling/cluster_size_autoscaling.go:991 Spec Goroutine goroutine 11138 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002077ba0}, 0xc004577f38, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004078600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 08:29:21.134888 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 6m27.697s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 In [It] (Node Runtime: 6m20.008s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 At [By Step] Waiting for scale down (Step Runtime: 4m21.565s) test/e2e/autoscaling/cluster_size_autoscaling.go:991 Spec Goroutine goroutine 11138 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002077ba0}, 0xc004577f38, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004078600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 08:29:41.182858 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 6m47.698s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 In [It] (Node Runtime: 6m40.009s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 At [By Step] Waiting for scale down (Step Runtime: 4m41.566s) test/e2e/autoscaling/cluster_size_autoscaling.go:991 Spec Goroutine goroutine 11138 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002077ba0}, 0xc004577f38, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004078600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Dec 1 08:30:01.234: INFO: Condition Ready of node ca-minion-group-sthz is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669881908 PreferNoSchedule <nil>} {ToBeDeletedByClusterAutoscaler 1669883341 NoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-12-01 08:29:45 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-12-01 08:29:50 +0000 UTC}]. 
Failure I1201 08:30:01.234724 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 1 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 7m7.699s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 In [It] (Node Runtime: 7m0.01s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 At [By Step] Waiting for scale down (Step Runtime: 5m1.567s) test/e2e/autoscaling/cluster_size_autoscaling.go:991 Spec Goroutine goroutine 11138 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002077ba0}, 0xc001687f38, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004078600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Dec 1 08:30:21.282: INFO: Condition Ready of node ca-minion-group-sthz is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669881908 PreferNoSchedule <nil>} {ToBeDeletedByClusterAutoscaler 1669883341 NoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-12-01 08:29:45 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-12-01 08:29:50 +0000 UTC}]. Failure I1201 08:30:21.282257 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 1 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 7m27.701s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 In [It] (Node Runtime: 7m20.012s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 At [By Step] Waiting for scale down (Step Runtime: 5m21.569s) test/e2e/autoscaling/cluster_size_autoscaling.go:991 Spec Goroutine goroutine 11138 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002077ba0}, 0xc001687f38, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004078600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Dec 1 08:30:41.329: INFO: Condition Ready of node ca-minion-group-sthz is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669881908 PreferNoSchedule <nil>} {ToBeDeletedByClusterAutoscaler 1669883341 NoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-12-01 08:29:45 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-12-01 08:29:50 +0000 UTC}]. Failure I1201 08:30:41.329971 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 1 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 7m47.702s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 In [It] (Node Runtime: 7m40.013s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 At [By Step] Waiting for scale down (Step Runtime: 5m41.57s) test/e2e/autoscaling/cluster_size_autoscaling.go:991 Spec Goroutine goroutine 11138 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002077ba0}, 0xc001687f38, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004078600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Dec 1 08:31:01.380: INFO: Condition Ready of node ca-minion-group-sthz is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669881908 PreferNoSchedule <nil>} {ToBeDeletedByClusterAutoscaler 1669883341 NoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-12-01 08:29:45 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-12-01 08:29:50 +0000 UTC}]. 
Failure I1201 08:31:01.380075 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 1 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 8m7.704s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 In [It] (Node Runtime: 8m0.015s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 At [By Step] Waiting for scale down (Step Runtime: 6m1.572s) test/e2e/autoscaling/cluster_size_autoscaling.go:991 Spec Goroutine goroutine 11138 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002077ba0}, 0xc001687f38, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004078600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 08:31:21.429631 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 5, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 8m27.705s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 In [It] (Node Runtime: 8m20.016s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 At [By Step] Waiting for scale down (Step Runtime: 6m21.573s) test/e2e/autoscaling/cluster_size_autoscaling.go:991 Spec Goroutine goroutine 11138 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002077ba0}, 0xc001687f38, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004078600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 08:31:41.478356 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 5, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 8m47.707s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 In [It] (Node Runtime: 8m40.018s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 At [By Step] Waiting for scale down (Step Runtime: 6m41.575s) test/e2e/autoscaling/cluster_size_autoscaling.go:991 Spec Goroutine goroutine 11138 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002077ba0}, 0xc001687f38, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004078600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 08:32:01.523304 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 5, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 9m7.708s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 In [It] (Node Runtime: 9m0.019s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 At [By Step] Waiting for scale down (Step Runtime: 7m1.576s) test/e2e/autoscaling/cluster_size_autoscaling.go:991 Spec Goroutine goroutine 11138 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002077ba0}, 0xc001687f38, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004078600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 08:32:21.569026 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 5, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 9m27.71s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 In [It] (Node Runtime: 9m20.02s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 At [By Step] Waiting for scale down (Step Runtime: 7m21.577s) test/e2e/autoscaling/cluster_size_autoscaling.go:991 Spec Goroutine goroutine 11138 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002077ba0}, 0xc001687f38, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004078600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 08:32:41.617416 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 5, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 9m47.712s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 In [It] (Node Runtime: 9m40.022s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 At [By Step] Waiting for scale down (Step Runtime: 7m41.579s) test/e2e/autoscaling/cluster_size_autoscaling.go:991 Spec Goroutine goroutine 11138 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002077ba0}, 0xc001687f38, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004078600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 08:33:01.666832 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 5, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 10m7.713s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 In [It] (Node Runtime: 10m0.024s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 At [By Step] Waiting for scale down (Step Runtime: 8m1.581s) test/e2e/autoscaling/cluster_size_autoscaling.go:991 Spec Goroutine goroutine 11138 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002077ba0}, 0xc001687f38, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004078600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 08:33:21.713303 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 5, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 10m27.714s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 In [It] (Node Runtime: 10m20.025s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 At [By Step] Waiting for scale down (Step Runtime: 8m21.582s) test/e2e/autoscaling/cluster_size_autoscaling.go:991 Spec Goroutine goroutine 11138 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002077ba0}, 0xc001687f38, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004078600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 08:33:41.759578 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 5, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 10m47.716s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 In [It] (Node Runtime: 10m40.026s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 At [By Step] Waiting for scale down (Step Runtime: 8m41.584s) test/e2e/autoscaling/cluster_size_autoscaling.go:991 Spec Goroutine goroutine 11138 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002077ba0}, 0xc001687f38, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004078600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 08:34:01.805230 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 5, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 11m7.717s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 In [It] (Node Runtime: 11m0.028s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 At [By Step] Waiting for scale down (Step Runtime: 9m1.585s) test/e2e/autoscaling/cluster_size_autoscaling.go:991 Spec Goroutine goroutine 11138 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002077ba0}, 0xc000ca9f38, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004078600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 08:34:21.853388 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 5, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 11m27.719s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 In [It] (Node Runtime: 11m20.03s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 At [By Step] Waiting for scale down (Step Runtime: 9m21.587s) test/e2e/autoscaling/cluster_size_autoscaling.go:991 Spec Goroutine goroutine 11138 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002077ba0}, 0xc000ca9f38, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004078600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 08:34:41.900185 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 5, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 11m47.721s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 In [It] (Node Runtime: 11m40.031s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 At [By Step] Waiting for scale down (Step Runtime: 9m41.589s) test/e2e/autoscaling/cluster_size_autoscaling.go:991 Spec Goroutine goroutine 11138 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002077ba0}, 0xc000ca9f38, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004078600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 08:35:01.948245 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 5, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 12m7.722s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 In [It] (Node Runtime: 12m0.032s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 At [By Step] Waiting for scale down (Step Runtime: 10m1.589s) test/e2e/autoscaling/cluster_size_autoscaling.go:991 Spec Goroutine goroutine 11138 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002077ba0}, 0xc000ca9f38, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004078600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Dec 1 08:35:21.995: INFO: Condition Ready of node ca-minion-group-rmxn is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669883069 PreferNoSchedule <nil>} {ToBeDeletedByClusterAutoscaler 1669883674 NoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-12-01 08:35:10 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-12-01 08:35:15 +0000 UTC}]. 
Failure I1201 08:35:21.995880 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 5, not ready nodes 1 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 12m27.723s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 In [It] (Node Runtime: 12m20.034s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 At [By Step] Waiting for scale down (Step Runtime: 10m21.591s) test/e2e/autoscaling/cluster_size_autoscaling.go:991 Spec Goroutine goroutine 11138 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002077ba0}, 0xc00361df38, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004078600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 12m47.725s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 In [It] (Node Runtime: 12m40.036s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 At [By Step] Waiting for scale down (Step Runtime: 10m41.593s) test/e2e/autoscaling/cluster_size_autoscaling.go:991 Spec Goroutine goroutine 11138 [select] k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).RoundTrip(0xc000584c00, 0xc001102d00) vendor/golang.org/x/net/http2/transport.go:1200 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc003b9ea00, 0xc001102d00, {0xe0?}) vendor/golang.org/x/net/http2/transport.go:519 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTrip(...) vendor/golang.org/x/net/http2/transport.go:480 k8s.io/kubernetes/vendor/golang.org/x/net/http2.noDialH2RoundTripper.RoundTrip({0xc003bdc000?}, 0xc001102d00?) vendor/golang.org/x/net/http2/transport.go:3020 net/http.(*Transport).roundTrip(0xc003bdc000, 0xc001102d00) /usr/local/go/src/net/http/transport.go:540 net/http.(*Transport).RoundTrip(0x6fe4b20?, 0xc003d5b740?) 
/usr/local/go/src/net/http/roundtrip.go:17 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*bearerAuthRoundTripper).RoundTrip(0xc0042becc0, 0xc001102c00) vendor/k8s.io/client-go/transport/round_trippers.go:317 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc00134f3c0, 0xc001102b00) vendor/k8s.io/client-go/transport/round_trippers.go:168 net/http.send(0xc001102b00, {0x7fad100, 0xc00134f3c0}, {0x74d54e0?, 0x1?, 0x0?}) /usr/local/go/src/net/http/client.go:251 net/http.(*Client).send(0xc0042becf0, 0xc001102b00, {0x7f6aec510a68?, 0x100?, 0x0?}) /usr/local/go/src/net/http/client.go:175 net/http.(*Client).do(0xc0042becf0, 0xc001102b00) /usr/local/go/src/net/http/client.go:715 net/http.(*Client).Do(...) /usr/local/go/src/net/http/client.go:581 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).request(0xc001102900, {0x7fe0bc8, 0xc000136008}, 0x0?) vendor/k8s.io/client-go/rest/request.go:964 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).Do(0xc001102900, {0x7fe0bc8, 0xc000136008}) vendor/k8s.io/client-go/rest/request.go:1005 k8s.io/kubernetes/vendor/k8s.io/client-go/kubernetes/typed/core/v1.(*nodes).List(0xc000ebdd40, {0x7fe0bc8, 0xc000136008}, {{{0x0, 0x0}, {0x0, 0x0}}, {0x0, 0x0}, {0xc0011dbdb8, ...}, ...}) vendor/k8s.io/client-go/kubernetes/typed/core/v1/node.go:93 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002077ba0}, 0xc00361df38, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1365 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004078600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Dec 1 08:35:42.042: INFO: Condition Ready of node ca-minion-group-1-3m6g is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Dec 1 08:35:42.042: INFO: Condition Ready of node ca-minion-group-1-6h5h is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Dec 1 08:35:42.042: INFO: Condition Ready of node ca-minion-group-l31t is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Dec 1 08:35:42.042: INFO: Condition Ready of node ca-minion-group-rmxn is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669883069 PreferNoSchedule <nil>} {ToBeDeletedByClusterAutoscaler 1669883674 NoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-12-01 08:35:10 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-12-01 08:35:15 +0000 UTC}]. 
Failure I1201 08:35:42.042973 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 5, not ready nodes 4 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 13m7.727s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 In [It] (Node Runtime: 13m0.037s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 At [By Step] Waiting for scale down (Step Runtime: 11m1.594s) test/e2e/autoscaling/cluster_size_autoscaling.go:991 Spec Goroutine goroutine 11138 [sleep, 2 minutes] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002077ba0}, 0xc000ca5f38, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004078600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Dec 1 08:36:02.089: INFO: Condition Ready of node ca-minion-group-1-3m6g is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Dec 1 08:36:02.089: INFO: Condition Ready of node ca-minion-group-1-6h5h is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Dec 1 08:36:02.089: INFO: Condition Ready of node ca-minion-group-l31t is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Dec 1 08:36:02.089: INFO: Condition Ready of node ca-minion-group-rmxn is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669883069 PreferNoSchedule <nil>} {ToBeDeletedByClusterAutoscaler 1669883674 NoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-12-01 08:35:10 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-12-01 08:35:15 +0000 UTC}]. Failure I1201 08:36:02.089533 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 5, not ready nodes 4 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 13m27.728s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 In [It] (Node Runtime: 13m20.038s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 At [By Step] Waiting for scale down (Step Runtime: 11m21.595s) test/e2e/autoscaling/cluster_size_autoscaling.go:991 Spec Goroutine goroutine 11138 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002077ba0}, 0xc00168ff38, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004078600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Dec 1 08:36:22.135: INFO: Condition Ready of node ca-minion-group-1-3m6g is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Dec 1 08:36:22.135: INFO: Condition Ready of node ca-minion-group-1-6h5h is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Dec 1 08:36:22.135: INFO: Condition Ready of node ca-minion-group-l31t is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Dec 1 08:36:22.135: INFO: Condition Ready of node ca-minion-group-rmxn is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669883069 PreferNoSchedule <nil>} {ToBeDeletedByClusterAutoscaler 1669883674 NoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-12-01 08:35:10 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-12-01 08:35:15 +0000 UTC}]. Failure I1201 08:36:22.135425 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 5, not ready nodes 4 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 13m47.729s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 In [It] (Node Runtime: 13m40.04s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 At [By Step] Waiting for scale down (Step Runtime: 11m41.597s) test/e2e/autoscaling/cluster_size_autoscaling.go:991 Spec Goroutine goroutine 11138 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002077ba0}, 0xc00168ff38, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004078600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Dec 1 08:36:42.181: INFO: Condition Ready of node ca-minion-group-1-3m6g is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Dec 1 08:36:42.181: INFO: Condition Ready of node ca-minion-group-1-6h5h is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
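Two different kinds of "not Ready" show up in the lines above: nodes whose kubelet simply stopped posting status (Reason NodeStatusUnknown), and the scale-down victim ca-minion-group-rmxn, which additionally carries the cluster autoscaler's DeletionCandidateOfClusterAutoscaler / ToBeDeletedByClusterAutoscaler taints plus the node.kubernetes.io/unreachable taints from the node controller. When triaging a run like this by hand, reading each node's Ready condition together with those taints makes the distinction easy to see; the helper below is a hypothetical sketch of that check (its name and output format are invented), using only standard client-go calls.

```go
// Package sketch: hypothetical helper for classifying not-ready nodes the way
// the log lines above do; not part of the e2e framework.
package sketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// explainNotReady prints every node whose Ready condition is not True,
// along with the autoscaler/unreachable taints it carries, if any.
func explainNotReady(ctx context.Context, c kubernetes.Interface) error {
	nodes, err := c.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	interesting := map[string]bool{
		"ToBeDeletedByClusterAutoscaler":       true, // node picked for scale-down
		"DeletionCandidateOfClusterAutoscaler": true, // soft-tainted scale-down candidate
		"node.kubernetes.io/unreachable":       true, // node controller lost contact with the kubelet
	}
	for _, node := range nodes.Items {
		for _, cond := range node.Status.Conditions {
			if cond.Type != corev1.NodeReady || cond.Status == corev1.ConditionTrue {
				continue
			}
			fmt.Printf("node %s not Ready (reason: %s)\n", node.Name, cond.Reason)
			for _, t := range node.Spec.Taints {
				if interesting[t.Key] {
					fmt.Printf("  taint %s (effect %s)\n", t.Key, t.Effect)
				}
			}
		}
	}
	return nil
}
```

Run against this cluster around 08:36, such a check would have reported ca-minion-group-1-3m6g, ca-minion-group-1-6h5h and ca-minion-group-l31t as NodeStatusUnknown, and ca-minion-group-rmxn with both autoscaler taints and the unreachable taints, matching the log lines above.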
Dec 1 08:36:42.181: INFO: Condition Ready of node ca-minion-group-l31t is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Dec 1 08:36:42.181: INFO: Condition Ready of node ca-minion-group-rmxn is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669883069 PreferNoSchedule <nil>} {ToBeDeletedByClusterAutoscaler 1669883674 NoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-12-01 08:35:10 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-12-01 08:35:15 +0000 UTC}]. Failure I1201 08:36:42.181149 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 5, not ready nodes 4 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 14m7.731s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 In [It] (Node Runtime: 14m0.042s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 At [By Step] Waiting for scale down (Step Runtime: 12m1.599s) test/e2e/autoscaling/cluster_size_autoscaling.go:991 Spec Goroutine goroutine 11138 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002077ba0}, 0xc00168ff38, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004078600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 08:37:02.225506 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 1, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 14m27.732s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 In [It] (Node Runtime: 14m20.043s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 At [By Step] Waiting for scale down (Step Runtime: 12m21.6s) test/e2e/autoscaling/cluster_size_autoscaling.go:991 Spec Goroutine goroutine 11138 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002077ba0}, 0xc00168ff38, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004078600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 08:37:22.270506 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 1, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 14m47.733s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 In [It] (Node Runtime: 14m40.044s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 At [By Step] Waiting for scale down (Step Runtime: 12m41.601s) test/e2e/autoscaling/cluster_size_autoscaling.go:991 Spec Goroutine goroutine 11138 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002077ba0}, 0xc00168ff38, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004078600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 08:37:42.314472 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 1, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 15m7.734s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 In [It] (Node Runtime: 15m0.045s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 At [By Step] Waiting for scale down (Step Runtime: 13m1.602s) test/e2e/autoscaling/cluster_size_autoscaling.go:991 Spec Goroutine goroutine 11138 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002077ba0}, 0xc00168ff38, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004078600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 08:38:02.358943 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 1, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 15m27.74s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 In [It] (Node Runtime: 15m20.051s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 At [By Step] Waiting for scale down (Step Runtime: 13m21.608s) test/e2e/autoscaling/cluster_size_autoscaling.go:991 Spec Goroutine goroutine 11138 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002077ba0}, 0xc000ca3f38, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004078600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 08:38:22.407700 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 1, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 15m47.741s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 In [It] (Node Runtime: 15m40.052s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 At [By Step] Waiting for scale down (Step Runtime: 13m41.609s) test/e2e/autoscaling/cluster_size_autoscaling.go:991 Spec Goroutine goroutine 11138 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002077ba0}, 0xc000ca3f38, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004078600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 08:38:42.453360 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 1, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 16m7.743s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 In [It] (Node Runtime: 16m0.054s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 At [By Step] Waiting for scale down (Step Runtime: 14m1.611s) test/e2e/autoscaling/cluster_size_autoscaling.go:991 Spec Goroutine goroutine 11138 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002077ba0}, 0xc000ca3f38, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004078600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 08:39:02.497783 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 1, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 16m27.744s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 In [It] (Node Runtime: 16m20.055s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 At [By Step] Waiting for scale down (Step Runtime: 14m21.612s) test/e2e/autoscaling/cluster_size_autoscaling.go:991 Spec Goroutine goroutine 11138 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002077ba0}, 0xc000ca3f38, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004078600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 08:39:22.542865 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 1, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 16m47.746s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 In [It] (Node Runtime: 16m40.057s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 At [By Step] Waiting for scale down (Step Runtime: 14m41.614s) test/e2e/autoscaling/cluster_size_autoscaling.go:991 Spec Goroutine goroutine 11138 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002077ba0}, 0xc000ca3f38, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004078600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 08:39:42.589206 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 1, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 17m7.748s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 In [It] (Node Runtime: 17m0.059s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 At [By Step] Waiting for scale down (Step Runtime: 15m1.616s) test/e2e/autoscaling/cluster_size_autoscaling.go:991 Spec Goroutine goroutine 11138 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002077ba0}, 0xc000ca3f38, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004078600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 08:40:02.635995 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 1, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 17m27.75s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 In [It] (Node Runtime: 17m20.06s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 At [By Step] Waiting for scale down (Step Runtime: 15m21.618s) test/e2e/autoscaling/cluster_size_autoscaling.go:991 Spec Goroutine goroutine 11138 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002077ba0}, 0xc000ca3f38, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004078600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 08:40:22.681412 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 1, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 17m47.751s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 In [It] (Node Runtime: 17m40.062s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 At [By Step] Waiting for scale down (Step Runtime: 15m41.619s) test/e2e/autoscaling/cluster_size_autoscaling.go:991 Spec Goroutine goroutine 11138 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002077ba0}, 0xc000ca3f38, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004078600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 08:40:42.727203 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 1, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 18m7.752s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 In [It] (Node Runtime: 18m0.063s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 At [By Step] Waiting for scale down (Step Runtime: 16m1.62s) test/e2e/autoscaling/cluster_size_autoscaling.go:991 Spec Goroutine goroutine 11138 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002077ba0}, 0xc000ca3f38, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004078600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 08:41:02.771926 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 1, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 18m27.753s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 In [It] (Node Runtime: 18m20.064s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 At [By Step] Waiting for scale down (Step Runtime: 16m21.621s) test/e2e/autoscaling/cluster_size_autoscaling.go:991 Spec Goroutine goroutine 11138 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002077ba0}, 0xc000ca3f38, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004078600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 08:41:22.849866 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 1, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 18m47.755s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 In [It] (Node Runtime: 18m40.066s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 At [By Step] Waiting for scale down (Step Runtime: 16m41.623s) test/e2e/autoscaling/cluster_size_autoscaling.go:991 Spec Goroutine goroutine 11138 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002077ba0}, 0xc000ca3f38, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004078600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 08:41:42.895564 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 1, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 19m7.756s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 In [It] (Node Runtime: 19m0.067s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 At [By Step] Waiting for scale down (Step Runtime: 17m1.624s) test/e2e/autoscaling/cluster_size_autoscaling.go:991 Spec Goroutine goroutine 11138 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002077ba0}, 0xc000ca3f38, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004078600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 08:42:02.939958 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 1, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 19m27.758s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 In [It] (Node Runtime: 19m20.069s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 At [By Step] Waiting for scale down (Step Runtime: 17m21.626s) test/e2e/autoscaling/cluster_size_autoscaling.go:991 Spec Goroutine goroutine 11138 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002077ba0}, 0xc000ca3f38, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004078600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 08:42:22.985531 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 1, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 19m47.759s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 In [It] (Node Runtime: 19m40.07s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 At [By Step] Waiting for scale down (Step Runtime: 17m41.627s) test/e2e/autoscaling/cluster_size_autoscaling.go:991 Spec Goroutine goroutine 11138 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002077ba0}, 0xc000ca3f38, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004078600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 08:42:43.030248 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 1, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 20m7.761s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 In [It] (Node Runtime: 20m0.072s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 At [By Step] Waiting for scale down (Step Runtime: 18m1.629s) test/e2e/autoscaling/cluster_size_autoscaling.go:991 Spec Goroutine goroutine 11138 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002077ba0}, 0xc000ca3f38, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004078600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 08:43:03.075993 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 1, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 20m27.763s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 In [It] (Node Runtime: 20m20.074s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 At [By Step] Waiting for scale down (Step Runtime: 18m21.631s) test/e2e/autoscaling/cluster_size_autoscaling.go:991 Spec Goroutine goroutine 11138 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002077ba0}, 0xc000ca3f38, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004078600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 08:43:23.119842 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 1, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 20m47.764s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 In [It] (Node Runtime: 20m40.075s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 At [By Step] Waiting for scale down (Step Runtime: 18m41.632s) test/e2e/autoscaling/cluster_size_autoscaling.go:991 Spec Goroutine goroutine 11138 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002077ba0}, 0xc000ca3f38, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004078600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 08:43:43.163594 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 1, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 21m7.765s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 In [It] (Node Runtime: 21m0.076s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 At [By Step] Waiting for scale down (Step Runtime: 19m1.633s) test/e2e/autoscaling/cluster_size_autoscaling.go:991 Spec Goroutine goroutine 11138 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002077ba0}, 0xc000ca3f38, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004078600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 08:44:03.209534 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 1, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 21m27.767s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 In [It] (Node Runtime: 21m20.078s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 At [By Step] Waiting for scale down (Step Runtime: 19m21.635s) test/e2e/autoscaling/cluster_size_autoscaling.go:991 Spec Goroutine goroutine 11138 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002077ba0}, 0xc000ca3f38, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004078600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1201 08:44:23.298677 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 1, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 21m47.768s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 In [It] (Node Runtime: 21m40.079s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 At [By Step] Waiting for scale down (Step Runtime: 19m41.636s) test/e2e/autoscaling/cluster_size_autoscaling.go:991 Spec Goroutine goroutine 11138 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002077ba0}, 0xc000ca3f38, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359
> k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36()
test/e2e/autoscaling/cluster_size_autoscaling.go:992
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004078600})
vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
I1201 08:44:43.342738 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 1, not ready nodes 0
------------------------------
Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 22m7.77s)
test/e2e/autoscaling/cluster_size_autoscaling.go:985
In [It] (Node Runtime: 22m0.08s)
test/e2e/autoscaling/cluster_size_autoscaling.go:985
At [By Step] Waiting for scale down (Step Runtime: 20m1.637s)
test/e2e/autoscaling/cluster_size_autoscaling.go:991
Spec Goroutine
goroutine 11138 [sleep]
time.Sleep(0x4a817c800)
/usr/local/go/src/runtime/time.go:195
> k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002077ba0}, 0xc000ca3f38, 0x1176592e000, 0x0)
test/e2e/autoscaling/cluster_size_autoscaling.go:1364
> k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...)
test/e2e/autoscaling/cluster_size_autoscaling.go:1359
> k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36()
test/e2e/autoscaling/cluster_size_autoscaling.go:992
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004078600})
vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
Dec 1 08:45:03.343: INFO: Unexpected error:
    <*errors.errorString | 0xc004076360>: {
        s: "timeout waiting 20m0s for appropriate cluster size",
    }
Dec 1 08:45:03.343: FAIL: timeout waiting 20m0s for appropriate cluster size

Full Stack Trace
k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36()
    test/e2e/autoscaling/cluster_size_autoscaling.go:992 +0x1bc
STEP: deleting ReplicationController memory-reservation in namespace autoscaling-2417, will wait for the garbage collector to delete the pods 12/01/22 08:45:03.343
Dec 1 08:45:03.483: INFO: Deleting ReplicationController memory-reservation took: 44.769387ms
Dec 1 08:45:03.583: INFO: Terminating ReplicationController memory-reservation pods took: 100.371553ms
[AfterEach] [sig-autoscaling] Cluster size autoscaling [Slow]
  test/e2e/framework/node/init/init.go:32
Dec 1 08:45:04.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-autoscaling] Cluster size autoscaling [Slow]
  test/e2e/autoscaling/cluster_size_autoscaling.go:139
STEP: Restoring initial size of the cluster 12/01/22 08:45:04.216
STEP: Setting size of ca-minion-group to 2 12/01/22 08:45:18.227
Dec 1 08:45:18.227: INFO: Skipping dumping logs from cluster
Dec 1 08:45:26.852: INFO: Skipping dumping logs from cluster
Dec 1 08:45:26.898: INFO: Waiting for ready nodes 2, current ready 1, not ready nodes 0
Dec 1 08:45:46.943: INFO: Waiting for ready nodes 2, current ready 1, not ready nodes 0
Dec 1 08:46:06.987: INFO: Waiting for ready nodes 2, current ready 1, not ready nodes 0
Dec 1 08:46:27.032: INFO: Waiting for ready nodes 2, current ready 1, not ready nodes 0
Dec 1 08:46:47.076: INFO: Cluster has reached the desired number of ready nodes 2
STEP: Remove taint from node ca-master 12/01/22 08:46:47.12
STEP: Remove taint from node ca-minion-group-5d60 12/01/22 08:46:47.163
STEP: Remove taint from node ca-minion-group-vlq2 12/01/22 08:46:47.206
I1201 08:46:47.249257 7918 cluster_size_autoscaling.go:165] Made nodes schedulable again in 129.247464ms
[DeferCleanup (Each)] [sig-autoscaling] Cluster size autoscaling [Slow]
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-autoscaling] Cluster size autoscaling [Slow]
  dump namespaces | framework.go:196
STEP: dump namespace information after failure 12/01/22 08:46:47.249
STEP: Collecting events from namespace "autoscaling-2417". 12/01/22 08:46:47.249
STEP: Found 55 events. 12/01/22 08:46:47.297
Dec 1 08:46:47.297: INFO: At 2022-12-01 08:24:50 +0000 UTC - event for memory-reservation: {replication-controller } SuccessfulCreate: Created pod: memory-reservation-xcllr
Dec 1 08:46:47.297: INFO: At 2022-12-01 08:24:50 +0000 UTC - event for memory-reservation: {replication-controller } SuccessfulCreate: Created pod: memory-reservation-xqfck
Dec 1 08:46:47.297: INFO: At 2022-12-01 08:24:50 +0000 UTC - event for memory-reservation: {replication-controller } SuccessfulCreate: Created pod: memory-reservation-ht6bv
Dec 1 08:46:47.297: INFO: At 2022-12-01 08:24:50 +0000 UTC - event for memory-reservation: {replication-controller } SuccessfulCreate: Created pod: memory-reservation-q4slt
Dec 1 08:46:47.297: INFO: At 2022-12-01 08:24:50 +0000 UTC - event for memory-reservation: {replication-controller } SuccessfulCreate: Created pod: memory-reservation-bq7fw
Dec 1 08:46:47.297: INFO: At 2022-12-01 08:24:50 +0000 UTC - event for memory-reservation: {replication-controller } SuccessfulCreate: Created pod: memory-reservation-c4rb5
Dec 1 08:46:47.297: INFO: At 2022-12-01 08:24:50 +0000 UTC - event for memory-reservation-bq7fw: {default-scheduler } Scheduled: Successfully assigned autoscaling-2417/memory-reservation-bq7fw to ca-minion-group-rmxn
Dec 1 08:46:47.297: INFO: At 2022-12-01 08:24:50 +0000 UTC - event for memory-reservation-c4rb5: {default-scheduler } Scheduled: Successfully assigned autoscaling-2417/memory-reservation-c4rb5 to ca-minion-group-sthz
Dec 1 08:46:47.297: INFO: At 2022-12-01 08:24:50 +0000 UTC - event for memory-reservation-ht6bv: {default-scheduler } Scheduled: Successfully assigned autoscaling-2417/memory-reservation-ht6bv to ca-minion-group-l31t
Dec 1 08:46:47.297: INFO: At 2022-12-01 08:24:50 +0000 UTC - event for memory-reservation-q4slt: {default-scheduler } Scheduled: Successfully assigned autoscaling-2417/memory-reservation-q4slt to ca-minion-group-1-6h5h
Dec 1 08:46:47.297: INFO: At 2022-12-01 08:24:50 +0000 UTC - event for memory-reservation-xcllr: {default-scheduler } Scheduled: Successfully assigned autoscaling-2417/memory-reservation-xcllr to ca-minion-group-vlq2
Dec 1 08:46:47.297: INFO: At 2022-12-01 08:24:50 +0000 UTC - event for memory-reservation-xqfck: {default-scheduler } Scheduled: Successfully assigned autoscaling-2417/memory-reservation-xqfck to ca-minion-group-1-3m6g
Dec 1 08:46:47.297: INFO: At 2022-12-01 08:24:51 +0000 UTC - event for memory-reservation-bq7fw: {kubelet ca-minion-group-rmxn} Started: Started container memory-reservation
Dec 1 08:46:47.297: INFO: At 2022-12-01 08:24:51 +0000 UTC - event for memory-reservation-bq7fw: {kubelet ca-minion-group-rmxn} Created: Created container memory-reservation
Dec 1 08:46:47.297: INFO: At 2022-12-01 08:24:51 +0000 UTC - event for memory-reservation-bq7fw: {kubelet ca-minion-group-rmxn} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine
Dec 1 08:46:47.297: INFO: At 2022-12-01 08:24:51 +0000 UTC - event for memory-reservation-c4rb5: {kubelet ca-minion-group-sthz} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine
Dec 1 08:46:47.297: INFO: At 2022-12-01 08:24:51 +0000 UTC - event for memory-reservation-c4rb5: {kubelet ca-minion-group-sthz} Started: Started container memory-reservation
Dec 1 08:46:47.297: INFO: At 2022-12-01 08:24:51 +0000 UTC - event for memory-reservation-c4rb5: {kubelet ca-minion-group-sthz} Created: Created container memory-reservation
Dec 1 08:46:47.297: INFO: At 2022-12-01 08:24:51 +0000 UTC - event for memory-reservation-ht6bv: {kubelet ca-minion-group-l31t} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-prntz" : failed to sync configmap cache: timed out waiting for the condition
Dec 1 08:46:47.297: INFO: At 2022-12-01 08:24:51 +0000 UTC - event for memory-reservation-xcllr: {kubelet ca-minion-group-vlq2} Started: Started container memory-reservation
Dec 1 08:46:47.297: INFO: At 2022-12-01 08:24:51 +0000 UTC - event for memory-reservation-xcllr: {kubelet ca-minion-group-vlq2} Created: Created container memory-reservation
Dec 1 08:46:47.297: INFO: At 2022-12-01 08:24:51 +0000 UTC - event for memory-reservation-xcllr: {kubelet ca-minion-group-vlq2} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine
Dec 1 08:46:47.297: INFO: At 2022-12-01 08:24:51 +0000 UTC - event for memory-reservation-xqfck: {kubelet ca-minion-group-1-3m6g} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-8z2hs" : failed to sync configmap cache: timed out waiting for the condition
Dec 1 08:46:47.297: INFO: At 2022-12-01 08:24:52 +0000 UTC - event for memory-reservation-ht6bv: {kubelet ca-minion-group-l31t} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine
Dec 1 08:46:47.297: INFO: At 2022-12-01 08:24:52 +0000 UTC - event for memory-reservation-ht6bv: {kubelet ca-minion-group-l31t} Created: Created container memory-reservation
Dec 1 08:46:47.297: INFO: At 2022-12-01 08:24:52 +0000 UTC - event for memory-reservation-ht6bv: {kubelet ca-minion-group-l31t} Started: Started container memory-reservation
Dec 1 08:46:47.297: INFO: At 2022-12-01 08:24:52 +0000 UTC - event for memory-reservation-q4slt: {kubelet ca-minion-group-1-6h5h} Started: Started container memory-reservation
Dec 1 08:46:47.297: INFO: At 2022-12-01 08:24:52 +0000 UTC - event for memory-reservation-q4slt: {kubelet ca-minion-group-1-6h5h} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine
Dec 1 08:46:47.297: INFO: At 2022-12-01 08:24:52 +0000 UTC - event for memory-reservation-q4slt: {kubelet ca-minion-group-1-6h5h} Created: Created container memory-reservation
Dec 1 08:46:47.297: INFO: At 2022-12-01 08:24:52 +0000 UTC - event for memory-reservation-xqfck: {kubelet ca-minion-group-1-3m6g} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine
Dec 1 08:46:47.297: INFO: At 2022-12-01 08:24:52 +0000 UTC - event for memory-reservation-xqfck: {kubelet ca-minion-group-1-3m6g} Created: Created container memory-reservation
Dec 1 08:46:47.297: INFO: At 2022-12-01 08:24:52 +0000 UTC - event for memory-reservation-xqfck: {kubelet ca-minion-group-1-3m6g} Started: Started container memory-reservation
Dec 1 08:46:47.297: INFO: At 2022-12-01 08:29:45 +0000 UTC - event for memory-reservation-c4rb5: {node-controller } NodeNotReady: Node is not ready
Dec 1 08:46:47.297: INFO: At 2022-12-01 08:32:10 +0000 UTC - event for memory-reservation: {replication-controller } SuccessfulCreate: Created pod: memory-reservation-gwffn
Dec 1 08:46:47.297: INFO: At 2022-12-01 08:32:10 +0000 UTC - event for memory-reservation-c4rb5: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod autoscaling-2417/memory-reservation-c4rb5
Dec 1 08:46:47.297: INFO: At 2022-12-01 08:32:10 +0000 UTC - event for memory-reservation-gwffn: {default-scheduler } FailedScheduling: 0/6 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 1 node(s) were unschedulable, 6 Insufficient memory. preemption: 0/6 nodes are available: 1 Preemption is not helpful for scheduling, 5 No preemption victims found for incoming pod..
Dec 1 08:46:47.297: INFO: At 2022-12-01 08:35:10 +0000 UTC - event for memory-reservation-bq7fw: {node-controller } NodeNotReady: Node is not ready
Dec 1 08:46:47.297: INFO: At 2022-12-01 08:35:25 +0000 UTC - event for memory-reservation-xqfck: {node-controller } NodeNotReady: Node is not ready
Dec 1 08:46:47.297: INFO: At 2022-12-01 08:35:31 +0000 UTC - event for memory-reservation-ht6bv: {node-controller } NodeNotReady: Node is not ready
Dec 1 08:46:47.297: INFO: At 2022-12-01 08:35:31 +0000 UTC - event for memory-reservation-q4slt: {node-controller } NodeNotReady: Node is not ready
Dec 1 08:46:47.297: INFO: At 2022-12-01 08:37:12 +0000 UTC - event for memory-reservation-gwffn: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 1 node(s) were unschedulable, 2 Insufficient memory. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling..
Dec 1 08:46:47.297: INFO: At 2022-12-01 08:37:30 +0000 UTC - event for memory-reservation: {replication-controller } SuccessfulCreate: Created pod: memory-reservation-5sxcd
Dec 1 08:46:47.297: INFO: At 2022-12-01 08:37:30 +0000 UTC - event for memory-reservation-5sxcd: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 1 node(s) were unschedulable, 2 Insufficient memory. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling..
Dec 1 08:46:47.297: INFO: At 2022-12-01 08:37:30 +0000 UTC - event for memory-reservation-bq7fw: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod autoscaling-2417/memory-reservation-bq7fw Dec 1 08:46:47.297: INFO: At 2022-12-01 08:37:50 +0000 UTC - event for memory-reservation: {replication-controller } SuccessfulCreate: Created pod: memory-reservation-kqc8k Dec 1 08:46:47.297: INFO: At 2022-12-01 08:37:50 +0000 UTC - event for memory-reservation: {replication-controller } SuccessfulCreate: (combined from similar events): Created pod: memory-reservation-5p95q Dec 1 08:46:47.297: INFO: At 2022-12-01 08:37:50 +0000 UTC - event for memory-reservation-5p95q: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 1 node(s) were unschedulable, 2 Insufficient memory. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling.. Dec 1 08:46:47.297: INFO: At 2022-12-01 08:37:50 +0000 UTC - event for memory-reservation-97rfn: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 1 node(s) were unschedulable, 2 Insufficient memory. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling.. Dec 1 08:46:47.297: INFO: At 2022-12-01 08:37:50 +0000 UTC - event for memory-reservation-kqc8k: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 1 node(s) were unschedulable, 2 Insufficient memory. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling.. 
Dec 1 08:46:47.297: INFO: At 2022-12-01 08:45:03 +0000 UTC - event for memory-reservation-5p95q: {default-scheduler } FailedScheduling: skip schedule deleting pod: autoscaling-2417/memory-reservation-5p95q Dec 1 08:46:47.297: INFO: At 2022-12-01 08:45:03 +0000 UTC - event for memory-reservation-5sxcd: {default-scheduler } FailedScheduling: skip schedule deleting pod: autoscaling-2417/memory-reservation-5sxcd Dec 1 08:46:47.297: INFO: At 2022-12-01 08:45:03 +0000 UTC - event for memory-reservation-97rfn: {default-scheduler } FailedScheduling: skip schedule deleting pod: autoscaling-2417/memory-reservation-97rfn Dec 1 08:46:47.297: INFO: At 2022-12-01 08:45:03 +0000 UTC - event for memory-reservation-gwffn: {default-scheduler } FailedScheduling: skip schedule deleting pod: autoscaling-2417/memory-reservation-gwffn Dec 1 08:46:47.297: INFO: At 2022-12-01 08:45:03 +0000 UTC - event for memory-reservation-kqc8k: {default-scheduler } FailedScheduling: skip schedule deleting pod: autoscaling-2417/memory-reservation-kqc8k Dec 1 08:46:47.297: INFO: At 2022-12-01 08:45:03 +0000 UTC - event for memory-reservation-xcllr: {kubelet ca-minion-group-vlq2} Killing: Stopping container memory-reservation Dec 1 08:46:47.339: INFO: POD NODE PHASE GRACE CONDITIONS Dec 1 08:46:47.339: INFO: Dec 1 08:46:47.382: INFO: Logging node info for node ca-master Dec 1 08:46:47.424: INFO: Node Info: &Node{ObjectMeta:{ca-master a2126acf-72e0-4c73-a9ef-ce1238132582 50584 0 2022-12-01 04:35:35 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:ca-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-12-01 04:35:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-12-01 04:35:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-12-01 04:35:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-12-01 08:45:52 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-autoscaling-migs/us-west1-b/ca-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-12-01 04:35:50 +0000 UTC,LastTransitionTime:2022-12-01 04:35:50 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-12-01 08:45:52 +0000 UTC,LastTransitionTime:2022-12-01 04:35:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-12-01 08:45:52 +0000 UTC,LastTransitionTime:2022-12-01 04:35:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-12-01 08:45:52 +0000 UTC,LastTransitionTime:2022-12-01 04:35:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-12-01 08:45:52 +0000 UTC,LastTransitionTime:2022-12-01 04:35:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.105.118.216,},NodeAddress{Type:InternalDNS,Address:ca-master.c.k8s-jkns-gci-autoscaling-migs.internal,},NodeAddress{Type:Hostname,Address:ca-master.c.k8s-jkns-gci-autoscaling-migs.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bac070a384fefb9133ee7878e15673cf,SystemUUID:bac070a3-84fe-fb91-33ee-7878e15673cf,BootID:39b8786f-3724-43ea-9f9b-9333f7876ff8,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.6.6,KubeletVersion:v1.27.0-alpha.0.54+79cba170b55bd0,KubeProxyVersion:v1.27.0-alpha.0.54+79cba170b55bd0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.54_79cba170b55bd0],SizeBytes:135160275,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.54_79cba170b55bd0],SizeBytes:124990265,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.54_79cba170b55bd0],SizeBytes:57660216,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/autoscaling/cluster-autoscaler@sha256:07ab8c89cd0ad296ddb6347febe196d8fe0e1c757656a98f71199860d87cf1a5 registry.k8s.io/autoscaling/cluster-autoscaler:v1.22.0],SizeBytes:24220268,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Dec 1 08:46:47.424: INFO: Logging kubelet events for node ca-master Dec 1 08:46:47.470: INFO: Logging pods the kubelet thinks is on node ca-master Dec 1 08:46:47.534: INFO: cluster-autoscaler-ca-master started at 2022-12-01 04:35:03 +0000 UTC (0+1 container statuses recorded) Dec 1 08:46:47.534: INFO: Container cluster-autoscaler ready: true, restart count 2 Dec 1 08:46:47.534: INFO: l7-lb-controller-ca-master started at 2022-12-01 04:35:03 +0000 UTC (0+1 container statuses recorded) Dec 1 08:46:47.534: INFO: Container l7-lb-controller ready: true, restart count 2 Dec 1 08:46:47.534: INFO: kube-apiserver-ca-master started at 2022-12-01 04:34:44 +0000 UTC (0+1 container statuses recorded) 
Dec 1 08:46:47.534: INFO: Container kube-apiserver ready: true, restart count 0 Dec 1 08:46:47.534: INFO: kube-controller-manager-ca-master started at 2022-12-01 04:34:44 +0000 UTC (0+1 container statuses recorded) Dec 1 08:46:47.535: INFO: Container kube-controller-manager ready: true, restart count 1 Dec 1 08:46:47.535: INFO: etcd-server-events-ca-master started at 2022-12-01 04:34:44 +0000 UTC (0+1 container statuses recorded) Dec 1 08:46:47.535: INFO: Container etcd-container ready: true, restart count 0 Dec 1 08:46:47.535: INFO: etcd-server-ca-master started at 2022-12-01 04:34:44 +0000 UTC (0+1 container statuses recorded) Dec 1 08:46:47.535: INFO: Container etcd-container ready: true, restart count 0 Dec 1 08:46:47.535: INFO: kube-addon-manager-ca-master started at 2022-12-01 04:35:03 +0000 UTC (0+1 container statuses recorded) Dec 1 08:46:47.535: INFO: Container kube-addon-manager ready: true, restart count 0 Dec 1 08:46:47.535: INFO: konnectivity-server-ca-master started at 2022-12-01 04:34:44 +0000 UTC (0+1 container statuses recorded) Dec 1 08:46:47.535: INFO: Container konnectivity-server-container ready: true, restart count 0 Dec 1 08:46:47.535: INFO: kube-scheduler-ca-master started at 2022-12-01 04:34:44 +0000 UTC (0+1 container statuses recorded) Dec 1 08:46:47.535: INFO: Container kube-scheduler ready: true, restart count 0 Dec 1 08:46:47.535: INFO: metadata-proxy-v0.1-4rrgr started at 2022-12-01 04:35:35 +0000 UTC (0+2 container statuses recorded) Dec 1 08:46:47.535: INFO: Container metadata-proxy ready: true, restart count 0 Dec 1 08:46:47.535: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Dec 1 08:46:47.733: INFO: Latency metrics for node ca-master Dec 1 08:46:47.733: INFO: Logging node info for node ca-minion-group-5d60 Dec 1 08:46:47.777: INFO: Node Info: &Node{ObjectMeta:{ca-minion-group-5d60 f80d0de1-9365-4420-aee0-0c1cfce42804 50740 0 2022-12-01 08:46:31 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:ca-minion-group-5d60 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-12-01 08:46:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubelet Update v1 2022-12-01 08:46:32 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2022-12-01 08:46:37 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-12-01 08:46:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.50.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-12-01 08:46:43 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.50.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-autoscaling-migs/us-west1-b/ca-minion-group-5d60,Unschedulable:false,Taints:[]Taint{Taint{Key:DeletionCandidateOfClusterAutoscaler,Value:1669884401,Effect:PreferNoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.50.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-12-01 08:46:37 +0000 UTC,LastTransitionTime:2022-12-01 08:46:36 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-12-01 08:46:37 +0000 UTC,LastTransitionTime:2022-12-01 08:46:36 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning 
properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-12-01 08:46:37 +0000 UTC,LastTransitionTime:2022-12-01 08:46:36 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-12-01 08:46:37 +0000 UTC,LastTransitionTime:2022-12-01 08:46:36 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-12-01 08:46:37 +0000 UTC,LastTransitionTime:2022-12-01 08:46:36 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-12-01 08:46:37 +0000 UTC,LastTransitionTime:2022-12-01 08:46:36 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-12-01 08:46:37 +0000 UTC,LastTransitionTime:2022-12-01 08:46:36 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-12-01 08:46:43 +0000 UTC,LastTransitionTime:2022-12-01 08:46:43 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-12-01 08:46:32 +0000 UTC,LastTransitionTime:2022-12-01 08:46:31 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-12-01 08:46:32 +0000 UTC,LastTransitionTime:2022-12-01 08:46:31 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-12-01 08:46:32 +0000 UTC,LastTransitionTime:2022-12-01 08:46:31 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-12-01 08:46:32 +0000 UTC,LastTransitionTime:2022-12-01 08:46:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.53,},NodeAddress{Type:ExternalIP,Address:34.168.16.21,},NodeAddress{Type:InternalDNS,Address:ca-minion-group-5d60.c.k8s-jkns-gci-autoscaling-migs.internal,},NodeAddress{Type:Hostname,Address:ca-minion-group-5d60.c.k8s-jkns-gci-autoscaling-migs.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9399bb068198cf3426a539ca2a95d9d5,SystemUUID:9399bb06-8198-cf34-26a5-39ca2a95d9d5,BootID:10ecf13a-984e-4b0a-b378-2dcbae80d983,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.6.6,KubeletVersion:v1.27.0-alpha.0.54+79cba170b55bd0,KubeProxyVersion:v1.27.0-alpha.0.54+79cba170b55bd0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.54_79cba170b55bd0],SizeBytes:67201736,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Dec 1 08:46:47.777: INFO: Logging kubelet events for node ca-minion-group-5d60 Dec 1 08:46:47.824: INFO: Logging pods the kubelet thinks is on node ca-minion-group-5d60 Dec 1 08:46:47.887: INFO: kube-proxy-ca-minion-group-5d60 started at 2022-12-01 08:46:31 +0000 UTC (0+1 container statuses recorded) Dec 1 08:46:47.887: INFO: Container kube-proxy ready: true, restart count 0 Dec 1 08:46:47.887: INFO: metadata-proxy-v0.1-stcdh started at 2022-12-01 08:46:32 +0000 UTC (0+2 container statuses recorded) Dec 1 08:46:47.887: INFO: Container metadata-proxy ready: true, restart count 0 Dec 1 08:46:47.887: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Dec 1 08:46:47.887: INFO: konnectivity-agent-w5jt8 started at 2022-12-01 08:46:43 +0000 UTC (0+1 container statuses recorded) Dec 1 08:46:47.887: INFO: Container konnectivity-agent ready: true, restart count 0 Dec 1 08:46:48.044: INFO: Latency metrics for node ca-minion-group-5d60 Dec 1 08:46:48.044: INFO: Logging node info for node ca-minion-group-vlq2 Dec 1 08:46:48.087: INFO: Node Info: &Node{ObjectMeta:{ca-minion-group-vlq2 132befc3-8b36-49c3-8aee-8af679afd99a 50645 0 2022-12-01 05:20:51 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:ca-minion-group-vlq2 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-12-01 05:20:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-12-01 05:21:02 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.14.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-12-01 05:21:02 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-12-01 08:43:25 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status} {node-problem-detector Update v1 2022-12-01 08:46:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.14.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-autoscaling-migs/us-west1-b/ca-minion-group-vlq2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.14.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-12-01 08:46:16 +0000 UTC,LastTransitionTime:2022-12-01 05:20:55 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-12-01 08:46:16 +0000 UTC,LastTransitionTime:2022-12-01 05:20:55 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-12-01 08:46:16 +0000 UTC,LastTransitionTime:2022-12-01 05:20:55 
+0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-12-01 08:46:16 +0000 UTC,LastTransitionTime:2022-12-01 05:20:55 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-12-01 08:46:16 +0000 UTC,LastTransitionTime:2022-12-01 05:20:55 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-12-01 08:46:16 +0000 UTC,LastTransitionTime:2022-12-01 05:20:55 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-12-01 08:46:16 +0000 UTC,LastTransitionTime:2022-12-01 05:20:55 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-12-01 05:21:02 +0000 UTC,LastTransitionTime:2022-12-01 05:21:02 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-12-01 08:43:25 +0000 UTC,LastTransitionTime:2022-12-01 05:20:51 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-12-01 08:43:25 +0000 UTC,LastTransitionTime:2022-12-01 05:20:51 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-12-01 08:43:25 +0000 UTC,LastTransitionTime:2022-12-01 05:20:51 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-12-01 08:43:25 +0000 UTC,LastTransitionTime:2022-12-01 05:20:51 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.16,},NodeAddress{Type:ExternalIP,Address:35.227.188.214,},NodeAddress{Type:InternalDNS,Address:ca-minion-group-vlq2.c.k8s-jkns-gci-autoscaling-migs.internal,},NodeAddress{Type:Hostname,Address:ca-minion-group-vlq2.c.k8s-jkns-gci-autoscaling-migs.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:240e9092aae0ae79fa5461368e619ce5,SystemUUID:240e9092-aae0-ae79-fa54-61368e619ce5,BootID:02338211-5cf4-4ba4-bf8a-82e73c605696,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.6.6,KubeletVersion:v1.27.0-alpha.0.54+79cba170b55bd0,KubeProxyVersion:v1.27.0-alpha.0.54+79cba170b55bd0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.54_79cba170b55bd0],SizeBytes:67201736,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Dec 1 08:46:48.087: INFO: Logging kubelet events for node ca-minion-group-vlq2 Dec 1 08:46:48.133: INFO: Logging pods the kubelet thinks is on node ca-minion-group-vlq2 Dec 1 08:46:48.197: INFO: konnectivity-agent-x9vdq started at 2022-12-01 05:21:02 +0000 UTC (0+1 container statuses recorded) Dec 1 08:46:48.197: INFO: Container konnectivity-agent ready: true, restart count 0 Dec 1 08:46:48.197: INFO: volume-snapshot-controller-0 started at 2022-12-01 05:49:04 +0000 UTC (0+1 container statuses recorded) Dec 1 08:46:48.197: INFO: Container volume-snapshot-controller ready: true, restart count 0 Dec 1 08:46:48.197: INFO: metadata-proxy-v0.1-mvw84 started at 
2022-12-01 05:20:52 +0000 UTC (0+2 container statuses recorded) Dec 1 08:46:48.197: INFO: Container metadata-proxy ready: true, restart count 0 Dec 1 08:46:48.197: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Dec 1 08:46:48.197: INFO: l7-default-backend-8549d69d99-n8nmc started at 2022-12-01 05:49:04 +0000 UTC (0+1 container statuses recorded) Dec 1 08:46:48.197: INFO: Container default-http-backend ready: true, restart count 0 Dec 1 08:46:48.197: INFO: metrics-server-v0.5.2-867b8754b9-pmk4k started at 2022-12-01 05:30:20 +0000 UTC (0+2 container statuses recorded) Dec 1 08:46:48.197: INFO: Container metrics-server ready: true, restart count 1 Dec 1 08:46:48.197: INFO: Container metrics-server-nanny ready: true, restart count 0 Dec 1 08:46:48.197: INFO: kube-proxy-ca-minion-group-vlq2 started at 2022-12-01 05:20:51 +0000 UTC (0+1 container statuses recorded) Dec 1 08:46:48.197: INFO: Container kube-proxy ready: true, restart count 0 Dec 1 08:46:48.197: INFO: coredns-6d97d5ddb-gpg9p started at 2022-12-01 05:49:04 +0000 UTC (0+1 container statuses recorded) Dec 1 08:46:48.197: INFO: Container coredns ready: true, restart count 0 Dec 1 08:46:48.389: INFO: Latency metrics for node ca-minion-group-vlq2 [DeferCleanup (Each)] [sig-autoscaling] Cluster size autoscaling [Slow] tear down framework | framework.go:193 STEP: Destroying namespace "autoscaling-2417" for this suite. 12/01/22 08:46:48.389
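The failure above reduces to one condition: the spec polled roughly every 20 seconds for 20 minutes and the cluster never reached the size it expected, which surfaces as "timeout waiting 20m0s for appropriate cluster size". For readers reproducing that check outside the e2e framework, here is a minimal, hedged sketch of such a readiness poll with client-go; waitForReadyNodes, the 20-second interval and the error text mirror the log but are illustrative, not the suite's actual WaitForClusterSizeFunc, which applies extra filtering (unschedulable and tainted nodes, unready allowances).

    package autoscalingsketch

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitForReadyNodes polls the node list until `want` nodes report Ready,
    // or returns an error once `timeout` has elapsed.
    func waitForReadyNodes(ctx context.Context, c kubernetes.Interface, want int, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            nodes, err := c.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
            if err != nil {
                return err
            }
            ready := 0
            for _, n := range nodes.Items {
                for _, cond := range n.Status.Conditions {
                    if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
                        ready++
                        break
                    }
                }
            }
            if ready == want {
                return nil // desired number of ready nodes reached
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timeout waiting %v for appropriate cluster size", timeout)
            }
            fmt.Printf("Waiting for ready nodes %d, current ready %d\n", want, ready)
            time.Sleep(20 * time.Second)
        }
    }

In the failing run above, the equivalent condition never became true before the 20-minute deadline, which is exactly what the FAIL line reports before the AfterEach restores the original group sizes.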
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-autoscaling\]\sCluster\ssize\sautoscaling\s\[Slow\]\sshouldn\'t\sbe\sable\sto\sscale\sdown\swhen\srescheduling\sa\spod\sis\srequired\,\sbut\spdb\sdoesn\'t\sallow\sdrain\[Feature\:ClusterSizeAutoscalingScaleDown\]$'
test/e2e/autoscaling/cluster_size_autoscaling.go:733 k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.25.1(0x0?) test/e2e/autoscaling/cluster_size_autoscaling.go:733 +0xb3 k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc000d12b40, 0x7fa3ee0?, {0xc003d4b2d0, 0x10}, 0x1, 0x0, 0xc00405bf58) test/e2e/autoscaling/cluster_size_autoscaling.go:1061 +0x842 k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.25() test/e2e/autoscaling/cluster_size_autoscaling.go:728 +0x54 from junit_01.xml
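This spec verifies that the cluster autoscaler refuses to scale a node group down while a PodDisruptionBudget forbids draining the pods running on the candidate node. As a point of reference, below is a minimal client-go sketch of creating such a blocking PDB; the namespace argument, the name=reschedulable-pods selector and the minAvailable value are assumptions for illustration, not necessarily the exact objects the test builds.

    package autoscalingsketch

    import (
        "context"

        policyv1 "k8s.io/api/policy/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
        "k8s.io/client-go/kubernetes"
    )

    // createBlockingPDB creates a PodDisruptionBudget that requires the selected
    // pods to stay available, so a node drain (and hence a scale-down) cannot evict them.
    func createBlockingPDB(ctx context.Context, c kubernetes.Interface, ns string) error {
        minAvailable := intstr.FromInt(1) // assumption: at least one replica per node must stay up
        pdb := &policyv1.PodDisruptionBudget{
            ObjectMeta: metav1.ObjectMeta{Name: "reschedulable-pods-pdb", Namespace: ns},
            Spec: policyv1.PodDisruptionBudgetSpec{
                MinAvailable: &minAvailable,
                Selector: &metav1.LabelSelector{
                    MatchLabels: map[string]string{"name": "reschedulable-pods"},
                },
            },
        }
        _, err := c.PolicyV1().PodDisruptionBudgets(ns).Create(ctx, pdb, metav1.CreateOptions{})
        return err
    }

With a budget like this in place, eviction requests against the covered pods are rejected, so the autoscaler keeps the node; that is the "No nodes should be removed" condition the spec then watches for the configured timeout.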
[BeforeEach] [sig-autoscaling] Cluster size autoscaling [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 12/01/22 07:57:50.249 Dec 1 07:57:50.250: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename autoscaling 12/01/22 07:57:50.251 STEP: Waiting for a default service account to be provisioned in namespace 12/01/22 07:57:50.382 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 12/01/22 07:57:50.463 [BeforeEach] [sig-autoscaling] Cluster size autoscaling [Slow] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-autoscaling] Cluster size autoscaling [Slow] test/e2e/autoscaling/cluster_size_autoscaling.go:103 STEP: Initial size of ca-minion-group-1: 0 12/01/22 07:57:53.879 STEP: Initial size of ca-minion-group: 2 12/01/22 07:57:57.166 Dec 1 07:57:57.212: INFO: Cluster has reached the desired number of ready nodes 2 STEP: Initial number of schedulable nodes: 2 12/01/22 07:57:57.257 [It] shouldn't be able to scale down when rescheduling a pod is required, but pdb doesn't allow drain[Feature:ClusterSizeAutoscalingScaleDown] test/e2e/autoscaling/cluster_size_autoscaling.go:727 STEP: Manually increase cluster size 12/01/22 07:57:57.257 STEP: Setting size of ca-minion-group to 4 12/01/22 07:58:00.592 Dec 1 07:58:00.592: INFO: Skipping dumping logs from cluster Dec 1 07:58:05.155: INFO: Skipping dumping logs from cluster STEP: Setting size of ca-minion-group-1 to 2 12/01/22 07:58:09.011 Dec 1 07:58:09.011: INFO: Skipping dumping logs from cluster Dec 1 07:58:13.366: INFO: Skipping dumping logs from cluster STEP: Setting size of ca-minion-group-1 to 2 12/01/22 07:58:16.606 Dec 1 07:58:16.606: INFO: Skipping dumping logs from cluster Dec 1 07:58:21.800: INFO: Skipping dumping logs from cluster W1201 07:58:25.272607 7918 cluster_size_autoscaling.go:1758] Unexpected node group size while waiting for cluster resize. Setting size to target again. 
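In the steps that follow, the test pins one pod to every node by tainting all nodes, clearing the taint on a single node at a time while a replica is started there, and finally removing the taints again. A hedged sketch of that add/remove taint primitive with client-go is shown below; the taint key and helper names are hypothetical rather than the suite's own, and a production helper would retry the update on conflict.

    package autoscalingsketch

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // taintKey is a hypothetical marker key; the e2e suite uses its own.
    const taintKey = "e2e-scale-down-disabled"

    // addNoScheduleTaint fetches the node, appends the marker taint, and updates it,
    // making the node unschedulable for new test pods.
    func addNoScheduleTaint(ctx context.Context, c kubernetes.Interface, nodeName string) error {
        node, err := c.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
        if err != nil {
            return err
        }
        node.Spec.Taints = append(node.Spec.Taints, corev1.Taint{
            Key:    taintKey,
            Value:  "true",
            Effect: corev1.TaintEffectNoSchedule,
        })
        _, err = c.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{})
        return err
    }

    // removeNoScheduleTaint drops the marker taint so the node is schedulable again.
    func removeNoScheduleTaint(ctx context.Context, c kubernetes.Interface, nodeName string) error {
        node, err := c.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
        if err != nil {
            return err
        }
        kept := node.Spec.Taints[:0]
        for _, t := range node.Spec.Taints {
            if t.Key != taintKey {
                kept = append(kept, t)
            }
        }
        node.Spec.Taints = kept
        _, err = c.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{})
        return err
    }

Tainting everything and then untainting one node at a time is what forces the scheduler to place exactly one reschedulable-pods replica on each node before the PodDisruptionBudget is created.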
I1201 07:58:25.272640 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 2, not ready nodes 0 I1201 07:58:52.873684 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 2, not ready nodes 0 I1201 07:59:19.613772 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 3, not ready nodes 0 I1201 07:59:39.662332 7918 cluster_size_autoscaling.go:1381] Cluster has reached the desired size STEP: Run a pod on each node 12/01/22 07:59:39.709 STEP: Taint node ca-minion-group-1-60fb 12/01/22 07:59:39.709 STEP: Taint node ca-minion-group-1-nnss 12/01/22 07:59:39.805 STEP: Taint node ca-minion-group-gxhb 12/01/22 07:59:39.9 STEP: Taint node ca-minion-group-jdlw 12/01/22 07:59:39.993 STEP: Taint node ca-minion-group-sthz 12/01/22 07:59:40.101 STEP: Taint node ca-minion-group-vlq2 12/01/22 07:59:40.195 STEP: creating replication controller reschedulable-pods in namespace autoscaling-1091 12/01/22 07:59:40.294 I1201 07:59:40.343120 7918 runners.go:193] Created replication controller with name: reschedulable-pods, namespace: autoscaling-1091, replica count: 0 STEP: Remove taint from node ca-minion-group-1-60fb 12/01/22 07:59:40.435 STEP: Taint node ca-minion-group-1-60fb 12/01/22 07:59:45.671 STEP: Remove taint from node ca-minion-group-1-nnss 12/01/22 07:59:45.768 STEP: Taint node ca-minion-group-1-nnss 12/01/22 07:59:51.008 STEP: Remove taint from node ca-minion-group-gxhb 12/01/22 07:59:51.107 STEP: Taint node ca-minion-group-gxhb 12/01/22 07:59:56.352 STEP: Remove taint from node ca-minion-group-jdlw 12/01/22 07:59:56.454 STEP: Taint node ca-minion-group-jdlw 12/01/22 08:00:01.732 STEP: Remove taint from node ca-minion-group-sthz 12/01/22 08:00:01.839 STEP: Taint node ca-minion-group-sthz 12/01/22 08:00:07.102 STEP: Remove taint from node ca-minion-group-vlq2 12/01/22 08:00:07.204 STEP: Taint node ca-minion-group-vlq2 12/01/22 08:00:12.441 STEP: Remove taint from node ca-minion-group-vlq2 12/01/22 08:00:12.542 STEP: Remove taint from node ca-minion-group-sthz 12/01/22 08:00:12.648 STEP: Remove taint from node ca-minion-group-jdlw 12/01/22 08:00:12.748 STEP: Remove taint from node ca-minion-group-gxhb 12/01/22 08:00:12.845 STEP: Remove taint from node ca-minion-group-1-nnss 12/01/22 08:00:12.945 STEP: Remove taint from node ca-minion-group-1-60fb 12/01/22 08:00:13.044 STEP: Create a PodDisruptionBudget 12/01/22 08:00:13.144 STEP: No nodes should be removed 12/01/22 08:00:13.192 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't be able to scale down when rescheduling a pod is required, but pdb doesn't allow drain[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 5m7.009s) test/e2e/autoscaling/cluster_size_autoscaling.go:727 In [It] (Node Runtime: 5m0.001s) test/e2e/autoscaling/cluster_size_autoscaling.go:727 At [By Step] No nodes should be removed (Step Runtime: 2m44.066s) test/e2e/autoscaling/cluster_size_autoscaling.go:729 Spec Goroutine goroutine 10539 [sleep, 3 minutes] time.Sleep(0x1176592e000) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.25.1(0x0?) 
test/e2e/autoscaling/cluster_size_autoscaling.go:730 > k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc000d12b40, 0x7fa3ee0?, {0xc003d4b2d0, 0x10}, 0x1, 0x0, 0xc00405bf58) test/e2e/autoscaling/cluster_size_autoscaling.go:1061 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.25() test/e2e/autoscaling/cluster_size_autoscaling.go:728 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003e2c300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't be able to scale down when rescheduling a pod is required, but pdb doesn't allow drain[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 5m27.011s) test/e2e/autoscaling/cluster_size_autoscaling.go:727 In [It] (Node Runtime: 5m20.003s) test/e2e/autoscaling/cluster_size_autoscaling.go:727 At [By Step] No nodes should be removed (Step Runtime: 3m4.069s) test/e2e/autoscaling/cluster_size_autoscaling.go:729 Spec Goroutine goroutine 10539 [sleep, 3 minutes] time.Sleep(0x1176592e000) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.25.1(0x0?) test/e2e/autoscaling/cluster_size_autoscaling.go:730 > k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc000d12b40, 0x7fa3ee0?, {0xc003d4b2d0, 0x10}, 0x1, 0x0, 0xc00405bf58) test/e2e/autoscaling/cluster_size_autoscaling.go:1061 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.25() test/e2e/autoscaling/cluster_size_autoscaling.go:728 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003e2c300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't be able to scale down when rescheduling a pod is required, but pdb doesn't allow drain[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 5m47.013s) test/e2e/autoscaling/cluster_size_autoscaling.go:727 In [It] (Node Runtime: 5m40.005s) test/e2e/autoscaling/cluster_size_autoscaling.go:727 At [By Step] No nodes should be removed (Step Runtime: 3m24.07s) test/e2e/autoscaling/cluster_size_autoscaling.go:729 Spec Goroutine goroutine 10539 [sleep, 4 minutes] time.Sleep(0x1176592e000) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.25.1(0x0?) 
test/e2e/autoscaling/cluster_size_autoscaling.go:730 > k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc000d12b40, 0x7fa3ee0?, {0xc003d4b2d0, 0x10}, 0x1, 0x0, 0xc00405bf58) test/e2e/autoscaling/cluster_size_autoscaling.go:1061 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.25() test/e2e/autoscaling/cluster_size_autoscaling.go:728 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003e2c300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't be able to scale down when rescheduling a pod is required, but pdb doesn't allow drain[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 6m7.014s) test/e2e/autoscaling/cluster_size_autoscaling.go:727 In [It] (Node Runtime: 6m0.006s) test/e2e/autoscaling/cluster_size_autoscaling.go:727 At [By Step] No nodes should be removed (Step Runtime: 3m44.072s) test/e2e/autoscaling/cluster_size_autoscaling.go:729 Spec Goroutine goroutine 10539 [sleep, 4 minutes] time.Sleep(0x1176592e000) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.25.1(0x0?) test/e2e/autoscaling/cluster_size_autoscaling.go:730 > k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc000d12b40, 0x7fa3ee0?, {0xc003d4b2d0, 0x10}, 0x1, 0x0, 0xc00405bf58) test/e2e/autoscaling/cluster_size_autoscaling.go:1061 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.25() test/e2e/autoscaling/cluster_size_autoscaling.go:728 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003e2c300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't be able to scale down when rescheduling a pod is required, but pdb doesn't allow drain[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 6m27.016s) test/e2e/autoscaling/cluster_size_autoscaling.go:727 In [It] (Node Runtime: 6m20.008s) test/e2e/autoscaling/cluster_size_autoscaling.go:727 At [By Step] No nodes should be removed (Step Runtime: 4m4.074s) test/e2e/autoscaling/cluster_size_autoscaling.go:729 Spec Goroutine goroutine 10539 [sleep, 4 minutes] time.Sleep(0x1176592e000) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.25.1(0x0?) 
test/e2e/autoscaling/cluster_size_autoscaling.go:730 > k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc000d12b40, 0x7fa3ee0?, {0xc003d4b2d0, 0x10}, 0x1, 0x0, 0xc00405bf58) test/e2e/autoscaling/cluster_size_autoscaling.go:1061 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.25() test/e2e/autoscaling/cluster_size_autoscaling.go:728 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003e2c300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
[Ginkgo repeats the same automatic progress report every 20 seconds, from Spec Runtime 6m47.018s through 21m47.072s, for [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't be able to scale down when rescheduling a pod is required, but pdb doesn't allow drain[Feature:ClusterSizeAutoscalingScaleDown] (test/e2e/autoscaling/cluster_size_autoscaling.go:727), at [By Step] No nodes should be removed (test/e2e/autoscaling/cluster_size_autoscaling.go:729). Every report shows Spec Goroutine 10539 asleep in time.Sleep (runtime/time.go:195), entered from cluster_size_autoscaling.go:730 via runDrainTest (cluster_size_autoscaling.go:1061). Only the Spec, Node, and Step runtimes advance between reports; the goroutine state and stack are identical, so the intermediate reports are not reproduced here.]
test/e2e/autoscaling/cluster_size_autoscaling.go:730 > k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc000d12b40, 0x7fa3ee0?, {0xc003d4b2d0, 0x10}, 0x1, 0x0, 0xc00405bf58) test/e2e/autoscaling/cluster_size_autoscaling.go:1061 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.25() test/e2e/autoscaling/cluster_size_autoscaling.go:728 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003e2c300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't be able to scale down when rescheduling a pod is required, but pdb doesn't allow drain[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 22m7.073s) test/e2e/autoscaling/cluster_size_autoscaling.go:727 In [It] (Node Runtime: 22m0.065s) test/e2e/autoscaling/cluster_size_autoscaling.go:727 At [By Step] No nodes should be removed (Step Runtime: 19m44.131s) test/e2e/autoscaling/cluster_size_autoscaling.go:729 Spec Goroutine goroutine 10539 [sleep, 20 minutes] time.Sleep(0x1176592e000) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.25.1(0x0?) test/e2e/autoscaling/cluster_size_autoscaling.go:730 > k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc000d12b40, 0x7fa3ee0?, {0xc003d4b2d0, 0x10}, 0x1, 0x0, 0xc00405bf58) test/e2e/autoscaling/cluster_size_autoscaling.go:1061 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.25() test/e2e/autoscaling/cluster_size_autoscaling.go:728 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003e2c300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Dec 1 08:20:13.396: FAIL: Expected <int>: 5 to equal <int>: 6 Full Stack Trace k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.25.1(0x0?) 
test/e2e/autoscaling/cluster_size_autoscaling.go:733 +0xb3 k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc000d12b40, 0x7fa3ee0?, {0xc003d4b2d0, 0x10}, 0x1, 0x0, 0xc00405bf58) test/e2e/autoscaling/cluster_size_autoscaling.go:1061 +0x842 k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.25() test/e2e/autoscaling/cluster_size_autoscaling.go:728 +0x54 STEP: deleting ReplicationController reschedulable-pods in namespace autoscaling-1091, will wait for the garbage collector to delete the pods 12/01/22 08:20:13.439 Dec 1 08:20:13.576: INFO: Deleting ReplicationController reschedulable-pods took: 44.433489ms Dec 1 08:20:13.677: INFO: Terminating ReplicationController reschedulable-pods pods took: 100.679272ms [AfterEach] [sig-autoscaling] Cluster size autoscaling [Slow] test/e2e/framework/node/init/init.go:32 Dec 1 08:20:14.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-autoscaling] Cluster size autoscaling [Slow] test/e2e/autoscaling/cluster_size_autoscaling.go:139 STEP: Restoring initial size of the cluster 12/01/22 08:20:15.036 STEP: Setting size of ca-minion-group to 2 12/01/22 08:20:19.634 Dec 1 08:20:19.634: INFO: Skipping dumping logs from cluster Dec 1 08:20:24.446: INFO: Skipping dumping logs from cluster STEP: Setting size of ca-minion-group-1 to 0 12/01/22 08:20:27.946 Dec 1 08:20:27.946: INFO: Skipping dumping logs from cluster Dec 1 08:20:32.454: INFO: Skipping dumping logs from cluster Dec 1 08:20:32.505: INFO: Waiting for ready nodes 2, current ready 6, not ready nodes 0 Dec 1 08:20:52.556: INFO: Waiting for ready nodes 2, current ready 6, not ready nodes 0 Dec 1 08:21:12.608: INFO: Condition Ready of node ca-minion-group-1-60fb is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Dec 1 08:21:12.608: INFO: Condition Ready of node ca-minion-group-gxhb is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669881898 PreferNoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-12-01 08:20:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-12-01 08:21:05 +0000 UTC}]. Failure Dec 1 08:21:12.608: INFO: Condition Ready of node ca-minion-group-jdlw is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Dec 1 08:21:12.608: INFO: Waiting for ready nodes 2, current ready 3, not ready nodes 3 Dec 1 08:21:32.657: INFO: Condition Ready of node ca-minion-group-1-60fb is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Dec 1 08:21:32.657: INFO: Condition Ready of node ca-minion-group-1-nnss is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Dec 1 08:21:32.657: INFO: Condition Ready of node ca-minion-group-gxhb is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669881898 PreferNoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-12-01 08:20:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-12-01 08:21:05 +0000 UTC}]. Failure Dec 1 08:21:32.657: INFO: Condition Ready of node ca-minion-group-jdlw is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669881797 PreferNoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-12-01 08:21:00 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-12-01 08:21:15 +0000 UTC}]. 
Failure Dec 1 08:21:32.657: INFO: Waiting for ready nodes 2, current ready 2, not ready nodes 4 Dec 1 08:21:52.708: INFO: Condition Ready of node ca-minion-group-1-60fb is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Dec 1 08:21:52.708: INFO: Condition Ready of node ca-minion-group-1-nnss is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Dec 1 08:21:52.708: INFO: Condition Ready of node ca-minion-group-gxhb is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669881898 PreferNoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-12-01 08:20:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-12-01 08:21:05 +0000 UTC}]. Failure Dec 1 08:21:52.708: INFO: Condition Ready of node ca-minion-group-jdlw is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669881797 PreferNoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-12-01 08:21:00 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-12-01 08:21:15 +0000 UTC}]. Failure Dec 1 08:21:52.708: INFO: Waiting for ready nodes 2, current ready 2, not ready nodes 4 Dec 1 08:22:12.760: INFO: Condition Ready of node ca-minion-group-1-60fb is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Dec 1 08:22:12.760: INFO: Condition Ready of node ca-minion-group-1-nnss is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Dec 1 08:22:12.760: INFO: Condition Ready of node ca-minion-group-gxhb is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669881898 PreferNoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-12-01 08:20:59 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-12-01 08:21:05 +0000 UTC}]. Failure Dec 1 08:22:12.760: INFO: Condition Ready of node ca-minion-group-jdlw is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669881797 PreferNoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-12-01 08:21:00 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-12-01 08:21:15 +0000 UTC}]. Failure Dec 1 08:22:12.760: INFO: Waiting for ready nodes 2, current ready 2, not ready nodes 4 Dec 1 08:22:32.809: INFO: Condition Ready of node ca-minion-group-1-nnss is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Dec 1 08:22:32.809: INFO: Condition Ready of node ca-minion-group-jdlw is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669881797 PreferNoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-12-01 08:21:00 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-12-01 08:21:15 +0000 UTC}]. 
Failure Dec 1 08:22:32.809: INFO: Waiting for ready nodes 2, current ready 2, not ready nodes 2 Dec 1 08:22:52.854: INFO: Cluster has reached the desired number of ready nodes 2 STEP: Remove taint from node ca-master 12/01/22 08:22:52.898 STEP: Remove taint from node ca-minion-group-sthz 12/01/22 08:22:52.942 STEP: Remove taint from node ca-minion-group-vlq2 12/01/22 08:22:52.984 I1201 08:22:53.025766 7918 cluster_size_autoscaling.go:165] Made nodes schedulable again in 126.850593ms [DeferCleanup (Each)] [sig-autoscaling] Cluster size autoscaling [Slow] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-autoscaling] Cluster size autoscaling [Slow] dump namespaces | framework.go:196 STEP: dump namespace information after failure 12/01/22 08:22:53.025 STEP: Collecting events from namespace "autoscaling-1091". 12/01/22 08:22:53.026 STEP: Found 40 events. 12/01/22 08:22:53.07 Dec 1 08:22:53.070: INFO: At 2022-12-01 07:59:40 +0000 UTC - event for reschedulable-pods: {replication-controller } SuccessfulCreate: Created pod: reschedulable-pods-79hn7 Dec 1 08:22:53.070: INFO: At 2022-12-01 07:59:40 +0000 UTC - event for reschedulable-pods-79hn7: {default-scheduler } Scheduled: Successfully assigned autoscaling-1091/reschedulable-pods-79hn7 to ca-minion-group-1-60fb Dec 1 08:22:53.070: INFO: At 2022-12-01 07:59:41 +0000 UTC - event for reschedulable-pods-79hn7: {kubelet ca-minion-group-1-60fb} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 08:22:53.070: INFO: At 2022-12-01 07:59:41 +0000 UTC - event for reschedulable-pods-79hn7: {kubelet ca-minion-group-1-60fb} Created: Created container reschedulable-pods Dec 1 08:22:53.070: INFO: At 2022-12-01 07:59:41 +0000 UTC - event for reschedulable-pods-79hn7: {kubelet ca-minion-group-1-60fb} Started: Started container reschedulable-pods Dec 1 08:22:53.070: INFO: At 2022-12-01 07:59:45 +0000 UTC - event for reschedulable-pods: {replication-controller } SuccessfulCreate: Created pod: reschedulable-pods-txmnf Dec 1 08:22:53.070: INFO: At 2022-12-01 07:59:45 +0000 UTC - event for reschedulable-pods-txmnf: {default-scheduler } Scheduled: Successfully assigned autoscaling-1091/reschedulable-pods-txmnf to ca-minion-group-1-nnss Dec 1 08:22:53.070: INFO: At 2022-12-01 07:59:46 +0000 UTC - event for reschedulable-pods-txmnf: {kubelet ca-minion-group-1-nnss} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 08:22:53.070: INFO: At 2022-12-01 07:59:46 +0000 UTC - event for reschedulable-pods-txmnf: {kubelet ca-minion-group-1-nnss} Created: Created container reschedulable-pods Dec 1 08:22:53.070: INFO: At 2022-12-01 07:59:46 +0000 UTC - event for reschedulable-pods-txmnf: {kubelet ca-minion-group-1-nnss} Started: Started container reschedulable-pods Dec 1 08:22:53.070: INFO: At 2022-12-01 07:59:51 +0000 UTC - event for reschedulable-pods: {replication-controller } SuccessfulCreate: Created pod: reschedulable-pods-g8dff Dec 1 08:22:53.070: INFO: At 2022-12-01 07:59:51 +0000 UTC - event for reschedulable-pods-g8dff: {default-scheduler } Scheduled: Successfully assigned autoscaling-1091/reschedulable-pods-g8dff to ca-minion-group-gxhb Dec 1 08:22:53.070: INFO: At 2022-12-01 07:59:52 +0000 UTC - event for reschedulable-pods-g8dff: {kubelet ca-minion-group-gxhb} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 08:22:53.070: INFO: At 2022-12-01 07:59:52 +0000 UTC - event for reschedulable-pods-g8dff: {kubelet ca-minion-group-gxhb} Created: Created 
container reschedulable-pods Dec 1 08:22:53.070: INFO: At 2022-12-01 07:59:52 +0000 UTC - event for reschedulable-pods-g8dff: {kubelet ca-minion-group-gxhb} Started: Started container reschedulable-pods Dec 1 08:22:53.070: INFO: At 2022-12-01 07:59:56 +0000 UTC - event for reschedulable-pods: {replication-controller } SuccessfulCreate: Created pod: reschedulable-pods-5t5mh Dec 1 08:22:53.070: INFO: At 2022-12-01 07:59:56 +0000 UTC - event for reschedulable-pods-5t5mh: {default-scheduler } Scheduled: Successfully assigned autoscaling-1091/reschedulable-pods-5t5mh to ca-minion-group-jdlw Dec 1 08:22:53.070: INFO: At 2022-12-01 07:59:57 +0000 UTC - event for reschedulable-pods-5t5mh: {kubelet ca-minion-group-jdlw} Started: Started container reschedulable-pods Dec 1 08:22:53.070: INFO: At 2022-12-01 07:59:57 +0000 UTC - event for reschedulable-pods-5t5mh: {kubelet ca-minion-group-jdlw} Created: Created container reschedulable-pods Dec 1 08:22:53.070: INFO: At 2022-12-01 07:59:57 +0000 UTC - event for reschedulable-pods-5t5mh: {kubelet ca-minion-group-jdlw} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 08:22:53.070: INFO: At 2022-12-01 08:00:02 +0000 UTC - event for reschedulable-pods: {replication-controller } SuccessfulCreate: Created pod: reschedulable-pods-qvgb7 Dec 1 08:22:53.070: INFO: At 2022-12-01 08:00:02 +0000 UTC - event for reschedulable-pods-qvgb7: {default-scheduler } Scheduled: Successfully assigned autoscaling-1091/reschedulable-pods-qvgb7 to ca-minion-group-sthz Dec 1 08:22:53.070: INFO: At 2022-12-01 08:00:02 +0000 UTC - event for reschedulable-pods-qvgb7: {kubelet ca-minion-group-sthz} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 08:22:53.070: INFO: At 2022-12-01 08:00:02 +0000 UTC - event for reschedulable-pods-qvgb7: {kubelet ca-minion-group-sthz} Created: Created container reschedulable-pods Dec 1 08:22:53.070: INFO: At 2022-12-01 08:00:03 +0000 UTC - event for reschedulable-pods-qvgb7: {kubelet ca-minion-group-sthz} Started: Started container reschedulable-pods Dec 1 08:22:53.070: INFO: At 2022-12-01 08:00:07 +0000 UTC - event for reschedulable-pods: {replication-controller } SuccessfulCreate: Created pod: reschedulable-pods-r5tm7 Dec 1 08:22:53.070: INFO: At 2022-12-01 08:00:07 +0000 UTC - event for reschedulable-pods-r5tm7: {default-scheduler } Scheduled: Successfully assigned autoscaling-1091/reschedulable-pods-r5tm7 to ca-minion-group-vlq2 Dec 1 08:22:53.070: INFO: At 2022-12-01 08:00:08 +0000 UTC - event for reschedulable-pods-r5tm7: {kubelet ca-minion-group-vlq2} Started: Started container reschedulable-pods Dec 1 08:22:53.070: INFO: At 2022-12-01 08:00:08 +0000 UTC - event for reschedulable-pods-r5tm7: {kubelet ca-minion-group-vlq2} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 08:22:53.070: INFO: At 2022-12-01 08:00:08 +0000 UTC - event for reschedulable-pods-r5tm7: {kubelet ca-minion-group-vlq2} Created: Created container reschedulable-pods Dec 1 08:22:53.070: INFO: At 2022-12-01 08:09:21 +0000 UTC - event for reschedulable-pods-5t5mh: {cluster-autoscaler } ScaleDown: deleting pod for node scale down Dec 1 08:22:53.070: INFO: At 2022-12-01 08:14:54 +0000 UTC - event for reschedulable-pods-txmnf: {cluster-autoscaler } ScaleDown: deleting pod for node scale down Dec 1 08:22:53.070: INFO: At 2022-12-01 08:16:55 +0000 UTC - event for reschedulable-pods-qvgb7: {cluster-autoscaler } ScaleDown: deleting pod for node scale down Dec 1 08:22:53.070: INFO: 
At 2022-12-01 08:19:36 +0000 UTC - event for reschedulable-pods-79hn7: {cluster-autoscaler } ScaleDown: deleting pod for node scale down Dec 1 08:22:53.070: INFO: At 2022-12-01 08:20:13 +0000 UTC - event for reschedulable-pods-5t5mh: {kubelet ca-minion-group-jdlw} Killing: Stopping container reschedulable-pods Dec 1 08:22:53.070: INFO: At 2022-12-01 08:20:13 +0000 UTC - event for reschedulable-pods-79hn7: {kubelet ca-minion-group-1-60fb} Killing: Stopping container reschedulable-pods Dec 1 08:22:53.070: INFO: At 2022-12-01 08:20:13 +0000 UTC - event for reschedulable-pods-g8dff: {kubelet ca-minion-group-gxhb} Killing: Stopping container reschedulable-pods Dec 1 08:22:53.070: INFO: At 2022-12-01 08:20:13 +0000 UTC - event for reschedulable-pods-qvgb7: {kubelet ca-minion-group-sthz} Killing: Stopping container reschedulable-pods Dec 1 08:22:53.070: INFO: At 2022-12-01 08:20:13 +0000 UTC - event for reschedulable-pods-r5tm7: {kubelet ca-minion-group-vlq2} Killing: Stopping container reschedulable-pods Dec 1 08:22:53.070: INFO: At 2022-12-01 08:20:13 +0000 UTC - event for reschedulable-pods-txmnf: {kubelet ca-minion-group-1-nnss} Killing: Stopping container reschedulable-pods Dec 1 08:22:53.112: INFO: POD NODE PHASE GRACE CONDITIONS Dec 1 08:22:53.112: INFO: Dec 1 08:22:53.156: INFO: Logging node info for node ca-master Dec 1 08:22:53.199: INFO: Node Info: &Node{ObjectMeta:{ca-master a2126acf-72e0-4c73-a9ef-ce1238132582 45995 0 2022-12-01 04:35:35 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:ca-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-12-01 04:35:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-12-01 04:35:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-12-01 04:35:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-12-01 08:20:21 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-autoscaling-migs/us-west1-b/ca-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-12-01 04:35:50 +0000 UTC,LastTransitionTime:2022-12-01 04:35:50 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-12-01 08:20:21 +0000 UTC,LastTransitionTime:2022-12-01 04:35:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-12-01 08:20:21 +0000 UTC,LastTransitionTime:2022-12-01 04:35:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-12-01 08:20:21 +0000 UTC,LastTransitionTime:2022-12-01 04:35:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-12-01 08:20:21 +0000 UTC,LastTransitionTime:2022-12-01 04:35:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.105.118.216,},NodeAddress{Type:InternalDNS,Address:ca-master.c.k8s-jkns-gci-autoscaling-migs.internal,},NodeAddress{Type:Hostname,Address:ca-master.c.k8s-jkns-gci-autoscaling-migs.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bac070a384fefb9133ee7878e15673cf,SystemUUID:bac070a3-84fe-fb91-33ee-7878e15673cf,BootID:39b8786f-3724-43ea-9f9b-9333f7876ff8,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.6.6,KubeletVersion:v1.27.0-alpha.0.54+79cba170b55bd0,KubeProxyVersion:v1.27.0-alpha.0.54+79cba170b55bd0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.54_79cba170b55bd0],SizeBytes:135160275,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.54_79cba170b55bd0],SizeBytes:124990265,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.54_79cba170b55bd0],SizeBytes:57660216,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/autoscaling/cluster-autoscaler@sha256:07ab8c89cd0ad296ddb6347febe196d8fe0e1c757656a98f71199860d87cf1a5 registry.k8s.io/autoscaling/cluster-autoscaler:v1.22.0],SizeBytes:24220268,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Dec 1 08:22:53.200: INFO: Logging kubelet events for node ca-master Dec 1 08:22:53.245: INFO: Logging pods the kubelet thinks is on node ca-master Dec 1 08:22:53.311: INFO: kube-addon-manager-ca-master started at 2022-12-01 04:35:03 +0000 UTC (0+1 container statuses recorded) Dec 1 08:22:53.311: INFO: Container kube-addon-manager ready: true, restart count 0 Dec 1 08:22:53.311: INFO: cluster-autoscaler-ca-master started at 2022-12-01 04:35:03 +0000 UTC (0+1 container statuses recorded) Dec 1 08:22:53.311: INFO: Container cluster-autoscaler ready: true, restart count 2 Dec 1 08:22:53.311: INFO: l7-lb-controller-ca-master started at 2022-12-01 04:35:03 +0000 UTC (0+1 container statuses 
recorded) Dec 1 08:22:53.311: INFO: Container l7-lb-controller ready: true, restart count 2 Dec 1 08:22:53.311: INFO: kube-apiserver-ca-master started at 2022-12-01 04:34:44 +0000 UTC (0+1 container statuses recorded) Dec 1 08:22:53.311: INFO: Container kube-apiserver ready: true, restart count 0 Dec 1 08:22:53.311: INFO: kube-controller-manager-ca-master started at 2022-12-01 04:34:44 +0000 UTC (0+1 container statuses recorded) Dec 1 08:22:53.311: INFO: Container kube-controller-manager ready: true, restart count 1 Dec 1 08:22:53.311: INFO: etcd-server-events-ca-master started at 2022-12-01 04:34:44 +0000 UTC (0+1 container statuses recorded) Dec 1 08:22:53.311: INFO: Container etcd-container ready: true, restart count 0 Dec 1 08:22:53.311: INFO: etcd-server-ca-master started at 2022-12-01 04:34:44 +0000 UTC (0+1 container statuses recorded) Dec 1 08:22:53.311: INFO: Container etcd-container ready: true, restart count 0 Dec 1 08:22:53.311: INFO: konnectivity-server-ca-master started at 2022-12-01 04:34:44 +0000 UTC (0+1 container statuses recorded) Dec 1 08:22:53.311: INFO: Container konnectivity-server-container ready: true, restart count 0 Dec 1 08:22:53.311: INFO: kube-scheduler-ca-master started at 2022-12-01 04:34:44 +0000 UTC (0+1 container statuses recorded) Dec 1 08:22:53.311: INFO: Container kube-scheduler ready: true, restart count 0 Dec 1 08:22:53.311: INFO: metadata-proxy-v0.1-4rrgr started at 2022-12-01 04:35:35 +0000 UTC (0+2 container statuses recorded) Dec 1 08:22:53.311: INFO: Container metadata-proxy ready: true, restart count 0 Dec 1 08:22:53.311: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Dec 1 08:22:53.565: INFO: Latency metrics for node ca-master Dec 1 08:22:53.565: INFO: Logging node info for node ca-minion-group-sthz Dec 1 08:22:53.609: INFO: Node Info: &Node{ObjectMeta:{ca-minion-group-sthz a70ea21d-aaa1-41b2-b831-9f921c7b7f62 46416 0 2022-12-01 07:12:29 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:ca-minion-group-sthz kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-12-01 07:12:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-12-01 07:12:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.36.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-12-01 07:12:41 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {cluster-autoscaler Update v1 2022-12-01 08:18:55 +0000 UTC FieldsV1 {"f:spec":{"f:taints":{}}} } {kubelet Update v1 2022-12-01 08:19:19 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status} {node-problem-detector Update v1 2022-12-01 08:22:45 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.36.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-autoscaling-migs/us-west1-b/ca-minion-group-sthz,Unschedulable:false,Taints:[]Taint{Taint{Key:DeletionCandidateOfClusterAutoscaler,Value:1669881908,Effect:PreferNoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.36.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-12-01 08:22:45 +0000 UTC,LastTransitionTime:2022-12-01 07:12:33 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-12-01 08:22:45 +0000 UTC,LastTransitionTime:2022-12-01 07:12:33 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-12-01 08:22:45 +0000 UTC,LastTransitionTime:2022-12-01 07:12:33 +0000 
UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-12-01 08:22:45 +0000 UTC,LastTransitionTime:2022-12-01 07:12:33 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-12-01 08:22:45 +0000 UTC,LastTransitionTime:2022-12-01 07:12:33 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-12-01 08:22:45 +0000 UTC,LastTransitionTime:2022-12-01 07:12:33 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-12-01 08:22:45 +0000 UTC,LastTransitionTime:2022-12-01 07:12:33 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-12-01 07:12:41 +0000 UTC,LastTransitionTime:2022-12-01 07:12:41 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-12-01 08:19:19 +0000 UTC,LastTransitionTime:2022-12-01 07:12:29 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-12-01 08:19:19 +0000 UTC,LastTransitionTime:2022-12-01 07:12:29 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-12-01 08:19:19 +0000 UTC,LastTransitionTime:2022-12-01 07:12:29 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-12-01 08:19:19 +0000 UTC,LastTransitionTime:2022-12-01 07:12:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.39,},NodeAddress{Type:ExternalIP,Address:34.82.83.63,},NodeAddress{Type:InternalDNS,Address:ca-minion-group-sthz.c.k8s-jkns-gci-autoscaling-migs.internal,},NodeAddress{Type:Hostname,Address:ca-minion-group-sthz.c.k8s-jkns-gci-autoscaling-migs.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:481c4117bb922808349e701d0755d1ca,SystemUUID:481c4117-bb92-2808-349e-701d0755d1ca,BootID:f1ec10da-b2bc-4414-99da-381625f7296c,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.6.6,KubeletVersion:v1.27.0-alpha.0.54+79cba170b55bd0,KubeProxyVersion:v1.27.0-alpha.0.54+79cba170b55bd0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.54_79cba170b55bd0],SizeBytes:67201736,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Dec 1 08:22:53.609: INFO: Logging kubelet events for node ca-minion-group-sthz Dec 1 08:22:53.656: INFO: Logging pods the kubelet thinks is on node ca-minion-group-sthz Dec 1 08:22:53.721: INFO: kube-proxy-ca-minion-group-sthz started at 2022-12-01 07:12:29 +0000 UTC (0+1 container statuses recorded) Dec 1 08:22:53.721: INFO: Container kube-proxy ready: true, restart count 0 Dec 1 08:22:53.721: INFO: metadata-proxy-v0.1-ds8th started at 2022-12-01 08:17:26 +0000 UTC (0+2 container statuses recorded) Dec 1 08:22:53.721: INFO: Container metadata-proxy ready: true, restart count 0 Dec 1 08:22:53.721: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Dec 1 08:22:53.721: INFO: konnectivity-agent-pb76f started at 2022-12-01 08:18:55 +0000 UTC (0+1 container statuses recorded) Dec 1 08:22:53.721: INFO: Container konnectivity-agent ready: true, restart count 0 Dec 1 08:22:53.895: INFO: Latency metrics for node ca-minion-group-sthz Dec 1 08:22:53.895: INFO: Logging node info for node ca-minion-group-vlq2 Dec 1 08:22:53.938: INFO: Node Info: &Node{ObjectMeta:{ca-minion-group-vlq2 132befc3-8b36-49c3-8aee-8af679afd99a 46154 0 2022-12-01 05:20:51 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:ca-minion-group-vlq2 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] 
[] [{kubelet Update v1 2022-12-01 05:20:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-12-01 05:21:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.14.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-12-01 05:21:02 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-12-01 08:17:54 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status} {node-problem-detector Update v1 2022-12-01 08:21:14 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.14.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-autoscaling-migs/us-west1-b/ca-minion-group-vlq2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.14.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 
DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-12-01 08:21:14 +0000 UTC,LastTransitionTime:2022-12-01 05:20:55 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-12-01 08:21:14 +0000 UTC,LastTransitionTime:2022-12-01 05:20:55 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-12-01 08:21:14 +0000 UTC,LastTransitionTime:2022-12-01 05:20:55 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-12-01 08:21:14 +0000 UTC,LastTransitionTime:2022-12-01 05:20:55 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-12-01 08:21:14 +0000 UTC,LastTransitionTime:2022-12-01 05:20:55 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-12-01 08:21:14 +0000 UTC,LastTransitionTime:2022-12-01 05:20:55 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-12-01 08:21:14 +0000 UTC,LastTransitionTime:2022-12-01 05:20:55 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-12-01 05:21:02 +0000 UTC,LastTransitionTime:2022-12-01 05:21:02 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-12-01 08:17:54 +0000 UTC,LastTransitionTime:2022-12-01 05:20:51 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-12-01 08:17:54 +0000 UTC,LastTransitionTime:2022-12-01 05:20:51 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-12-01 08:17:54 +0000 UTC,LastTransitionTime:2022-12-01 05:20:51 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-12-01 08:17:54 +0000 UTC,LastTransitionTime:2022-12-01 05:20:51 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.16,},NodeAddress{Type:ExternalIP,Address:35.227.188.214,},NodeAddress{Type:InternalDNS,Address:ca-minion-group-vlq2.c.k8s-jkns-gci-autoscaling-migs.internal,},NodeAddress{Type:Hostname,Address:ca-minion-group-vlq2.c.k8s-jkns-gci-autoscaling-migs.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:240e9092aae0ae79fa5461368e619ce5,SystemUUID:240e9092-aae0-ae79-fa54-61368e619ce5,BootID:02338211-5cf4-4ba4-bf8a-82e73c605696,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.6.6,KubeletVersion:v1.27.0-alpha.0.54+79cba170b55bd0,KubeProxyVersion:v1.27.0-alpha.0.54+79cba170b55bd0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.54_79cba170b55bd0],SizeBytes:67201736,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Dec 1 08:22:53.940: INFO: Logging kubelet events for node ca-minion-group-vlq2 Dec 1 08:22:53.985: INFO: Logging pods the kubelet thinks is on node ca-minion-group-vlq2 Dec 1 08:22:54.050: INFO: l7-default-backend-8549d69d99-n8nmc started at 2022-12-01 05:49:04 +0000 UTC (0+1 container statuses recorded) Dec 1 08:22:54.050: INFO: Container default-http-backend ready: true, restart count 0 Dec 1 08:22:54.050: INFO: metrics-server-v0.5.2-867b8754b9-pmk4k started at 2022-12-01 05:30:20 +0000 UTC (0+2 container statuses recorded) Dec 1 08:22:54.050: INFO: Container metrics-server ready: true, restart count 1 Dec 1 08:22:54.050: INFO: Container metrics-server-nanny 
ready: true, restart count 0 Dec 1 08:22:54.050: INFO: kube-proxy-ca-minion-group-vlq2 started at 2022-12-01 05:20:51 +0000 UTC (0+1 container statuses recorded) Dec 1 08:22:54.050: INFO: Container kube-proxy ready: true, restart count 0 Dec 1 08:22:54.050: INFO: coredns-6d97d5ddb-gpg9p started at 2022-12-01 05:49:04 +0000 UTC (0+1 container statuses recorded) Dec 1 08:22:54.050: INFO: Container coredns ready: true, restart count 0 Dec 1 08:22:54.050: INFO: konnectivity-agent-x9vdq started at 2022-12-01 05:21:02 +0000 UTC (0+1 container statuses recorded) Dec 1 08:22:54.050: INFO: Container konnectivity-agent ready: true, restart count 0 Dec 1 08:22:54.050: INFO: volume-snapshot-controller-0 started at 2022-12-01 05:49:04 +0000 UTC (0+1 container statuses recorded) Dec 1 08:22:54.050: INFO: Container volume-snapshot-controller ready: true, restart count 0 Dec 1 08:22:54.050: INFO: metadata-proxy-v0.1-mvw84 started at 2022-12-01 05:20:52 +0000 UTC (0+2 container statuses recorded) Dec 1 08:22:54.050: INFO: Container metadata-proxy ready: true, restart count 0 Dec 1 08:22:54.050: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Dec 1 08:22:54.227: INFO: Latency metrics for node ca-minion-group-vlq2 [DeferCleanup (Each)] [sig-autoscaling] Cluster size autoscaling [Slow] tear down framework | framework.go:193 STEP: Destroying namespace "autoscaling-1091" for this suite. 12/01/22 08:22:54.228
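The FAIL at cluster_size_autoscaling.go:733 ("Expected <int>: 5 to equal <int>: 6") appears to be the node-count comparison behind the "No nodes should be removed" step: five ready nodes where six were expected, i.e. a node was scaled down even though the PodDisruptionBudget should have blocked the drain (the cluster-autoscaler ScaleDown events on the reschedulable-pods above show the evictions went through anyway). The client-go sketch below is not the e2e helper itself; it is a minimal illustration of that pattern, and the function names, the "no-drain" PDB, the label selector, and the 20-minute window are all assumptions made for the example.

```go
// Minimal sketch (not the k8s e2e framework code) of the drain-test pattern:
// pin pods behind a PDB whose minAvailable equals the replica count, wait out
// the autoscaler's scale-down window, then assert no node disappeared.
package main

import (
	"context"
	"fmt"
	"time"

	policyv1 "k8s.io/api/policy/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// blockDrainWithPDB creates a PDB that forbids evicting any matching pod,
// which is what should keep cluster-autoscaler from draining their nodes.
func blockDrainWithPDB(ctx context.Context, cs kubernetes.Interface, ns string, replicas int) error {
	minAvailable := intstr.FromInt(replicas) // every replica must stay up, so no voluntary eviction is allowed
	pdb := &policyv1.PodDisruptionBudget{
		ObjectMeta: metav1.ObjectMeta{Name: "no-drain", Namespace: ns},
		Spec: policyv1.PodDisruptionBudgetSpec{
			MinAvailable: &minAvailable,
			// Label assumed for illustration; match whatever labels the RC actually applies.
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "reschedulable-pods"}},
		},
	}
	_, err := cs.PolicyV1().PodDisruptionBudgets(ns).Create(ctx, pdb, metav1.CreateOptions{})
	return err
}

// expectNoScaleDown records the node count, sleeps through the window,
// and reports an error if any node was removed in the meantime.
func expectNoScaleDown(ctx context.Context, cs kubernetes.Interface, window time.Duration) error {
	before, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	time.Sleep(window)
	after, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	if len(after.Items) != len(before.Items) {
		return fmt.Errorf("expected %d nodes, got %d: a node was scaled down despite the PDB",
			len(before.Items), len(after.Items))
	}
	return nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	if err := blockDrainWithPDB(ctx, cs, "autoscaling-1091", 6); err != nil {
		panic(err)
	}
	if err := expectNoScaleDown(ctx, cs, 20*time.Minute); err != nil {
		fmt.Println("FAIL:", err)
	}
}
```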
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-autoscaling\]\sCluster\ssize\sautoscaling\s\[Slow\]\sshouldn\'t\sscale\sup\swhen\sexpendable\spod\sis\screated\s\[Feature\:ClusterSizeAutoscalingScaleUp\]$'
test/e2e/autoscaling/cluster_size_autoscaling.go:959 k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.33() test/e2e/autoscaling/cluster_size_autoscaling.go:959 +0x1e7 from junit_01.xml
[BeforeEach] [sig-autoscaling] Cluster size autoscaling [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 12/01/22 06:51:39.702 Dec 1 06:51:39.702: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename autoscaling 12/01/22 06:51:39.704 STEP: Waiting for a default service account to be provisioned in namespace 12/01/22 06:51:39.831 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 12/01/22 06:51:39.913 [BeforeEach] [sig-autoscaling] Cluster size autoscaling [Slow] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-autoscaling] Cluster size autoscaling [Slow] test/e2e/autoscaling/cluster_size_autoscaling.go:103 STEP: Initial size of ca-minion-group-1: 0 12/01/22 06:51:43.971 STEP: Initial size of ca-minion-group: 2 12/01/22 06:51:47.78 Dec 1 06:51:47.824: INFO: Cluster has reached the desired number of ready nodes 2 STEP: Initial number of schedulable nodes: 2 12/01/22 06:51:47.869 [It] shouldn't scale up when expendable pod is created [Feature:ClusterSizeAutoscalingScaleUp] test/e2e/autoscaling/cluster_size_autoscaling.go:951 STEP: Running RC which reserves 15126 MB of memory 12/01/22 06:51:47.956 STEP: creating replication controller memory-reservation in namespace autoscaling-4542 12/01/22 06:51:47.957 I1201 06:51:48.004729 7918 runners.go:193] Created replication controller with name: memory-reservation, namespace: autoscaling-4542, replica count: 3 I1201 06:51:58.106130 7918 runners.go:193] memory-reservation Pods: 3 out of 3 created, 2 running, 0 pending, 1 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1201 06:52:08.107043 7918 runners.go:193] memory-reservation Pods: 3 out of 3 created, 2 running, 0 pending, 1 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1201 06:52:08.153269 7918 runners.go:193] Pod memory-reservation-gzstw ca-minion-group-r6jd Running <nil> I1201 06:52:08.153368 7918 runners.go:193] Pod memory-reservation-hwqn9 ca-minion-group-vlq2 Running <nil> I1201 06:52:08.153392 7918 runners.go:193] Pod memory-reservation-s8qq9 Pending <nil> STEP: Waiting for scale up hoping it won't happen, sleep for 5m0s 12/01/22 06:52:08.153 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale up when expendable pod is created [Feature:ClusterSizeAutoscalingScaleUp] (Spec Runtime: 5m8.167s) test/e2e/autoscaling/cluster_size_autoscaling.go:951 In [It] (Node Runtime: 5m0s) test/e2e/autoscaling/cluster_size_autoscaling.go:951 At [By Step] Waiting for scale up hoping it won't happen, sleep for 5m0s (Step Runtime: 4m39.716s) test/e2e/autoscaling/cluster_size_autoscaling.go:956 Spec Goroutine goroutine 6388 [sleep, 4 minutes] time.Sleep(0x45d964b800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.33() test/e2e/autoscaling/cluster_size_autoscaling.go:957 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc002610300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale 
up when expendable pod is created [Feature:ClusterSizeAutoscalingScaleUp] (Spec Runtime: 5m28.168s) test/e2e/autoscaling/cluster_size_autoscaling.go:951 In [It] (Node Runtime: 5m20.001s) test/e2e/autoscaling/cluster_size_autoscaling.go:951 At [By Step] Waiting for scale up hoping it won't happen, sleep for 5m0s (Step Runtime: 4m59.717s) test/e2e/autoscaling/cluster_size_autoscaling.go:956 Spec Goroutine goroutine 6388 [sleep, 4 minutes] time.Sleep(0x45d964b800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.33() test/e2e/autoscaling/cluster_size_autoscaling.go:957 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc002610300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Dec 1 06:57:08.320: INFO: Condition Ready of node ca-minion-group-r6jd is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669876963 PreferNoSchedule <nil>} {ToBeDeletedByClusterAutoscaler 1669877776 NoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-12-01 06:56:52 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-12-01 06:56:57 +0000 UTC}]. Failure I1201 06:57:08.320676 7918 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 2, not ready nodes 1 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale up when expendable pod is created [Feature:ClusterSizeAutoscalingScaleUp] (Spec Runtime: 5m48.169s) test/e2e/autoscaling/cluster_size_autoscaling.go:951 In [It] (Node Runtime: 5m40.002s) test/e2e/autoscaling/cluster_size_autoscaling.go:951 At [By Step] Waiting for scale up hoping it won't happen, sleep for 5m0s (Step Runtime: 5m19.718s) test/e2e/autoscaling/cluster_size_autoscaling.go:956 Spec Goroutine goroutine 6388 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc00205b860}, 0xc001993f28, 0x3b9aca00, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.33() test/e2e/autoscaling/cluster_size_autoscaling.go:959 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc002610300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Dec 1 06:57:28.321: INFO: Unexpected error: <*errors.errorString | 0xc00100ebd0>: { s: "timeout waiting 1s for appropriate cluster size", } Dec 1 06:57:28.321: FAIL: timeout waiting 1s for appropriate cluster size Full Stack Trace k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.33() test/e2e/autoscaling/cluster_size_autoscaling.go:959 +0x1e7 STEP: deleting ReplicationController memory-reservation in namespace autoscaling-4542, will wait for the garbage collector to delete the pods 12/01/22 06:57:28.321 Dec 1 06:57:28.460: INFO: Deleting ReplicationController memory-reservation took: 45.283446ms Dec 1 06:57:28.561: INFO: Terminating ReplicationController memory-reservation pods took: 100.3499ms ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale up when expendable pod is created [Feature:ClusterSizeAutoscalingScaleUp] (Spec Runtime: 6m8.169s) test/e2e/autoscaling/cluster_size_autoscaling.go:951 In [It] (Node Runtime: 6m0.003s) test/e2e/autoscaling/cluster_size_autoscaling.go:951 At [By Step] deleting ReplicationController memory-reservation in namespace autoscaling-4542, will wait for the garbage collector to delete the pods (Step Runtime: 19.55s) test/e2e/framework/resource/resources.go:69 Spec Goroutine goroutine 6388 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc004475518, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0x80?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0x180?, 0xc0019936d0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x3a0bb5d?, 0x7fa7740?, 0xc000218b80?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/resource.waitForPodsGone(0xc00152ca60, 0x4?, 0x764f42c?) test/e2e/framework/resource/resources.go:177 > k8s.io/kubernetes/test/e2e/framework/resource.deleteObjectAndWaitForGC({0x801de88, 0xc00205b860}, {0x7fb8040, 0xc000b57e40}, 0xc004bf83c0, {0xc00405eda0, 0x10}, {0x75f43b2, 0x12}, {0x7605c17, ...}) test/e2e/framework/resource/resources.go:167 > k8s.io/kubernetes/test/e2e/framework/resource.DeleteResourceAndWaitForGC({0x801de88?, 0xc00205b860}, {{0x0?, 0x0?}, {0x7605c17?, 0x0?}}, {0xc00405eda0, 0x10}, {0x75f43b2, 0x12}) test/e2e/framework/resource/resources.go:83 k8s.io/kubernetes/test/e2e/framework/rc.DeleteRCAndWaitForGC(...) 
test/e2e/framework/rc/rc_utils.go:74 > k8s.io/kubernetes/test/e2e/autoscaling.reserveMemory.func1() test/e2e/autoscaling/cluster_size_autoscaling.go:1332 panic({0x70eb7e0, 0xc0005cf490}) /usr/local/go/src/runtime/panic.go:884 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2.Fail({0xc004eb4eb0, 0x44}, {0xc001993db0?, 0x75b521a?, 0xc001993dd0?}) vendor/github.com/onsi/ginkgo/v2/core_dsl.go:352 k8s.io/kubernetes/test/e2e/framework.Fail({0xc0045e0ea0, 0x2f}, {0xc001993e48?, 0xc0045e0ea0?, 0xc001993e70?}) test/e2e/framework/log.go:61 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7fa3ee0, 0xc00100ebd0}, {0x0?, 0x0?, 0xc000d12b40?}) test/e2e/framework/expect.go:76 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.33() test/e2e/autoscaling/cluster_size_autoscaling.go:959 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc002610300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale up when expendable pod is created [Feature:ClusterSizeAutoscalingScaleUp] (Spec Runtime: 6m28.171s) test/e2e/autoscaling/cluster_size_autoscaling.go:951 In [It] (Node Runtime: 6m20.004s) test/e2e/autoscaling/cluster_size_autoscaling.go:951 At [By Step] deleting ReplicationController memory-reservation in namespace autoscaling-4542, will wait for the garbage collector to delete the pods (Step Runtime: 39.552s) test/e2e/framework/resource/resources.go:69 Spec Goroutine goroutine 6388 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc004475518, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0x80?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0x180?, 0xc0019936d0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x3a0bb5d?, 0x7fa7740?, 0xc000218b80?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/resource.waitForPodsGone(0xc00152ca60, 0x4?, 0x764f42c?) test/e2e/framework/resource/resources.go:177 > k8s.io/kubernetes/test/e2e/framework/resource.deleteObjectAndWaitForGC({0x801de88, 0xc00205b860}, {0x7fb8040, 0xc000b57e40}, 0xc004bf83c0, {0xc00405eda0, 0x10}, {0x75f43b2, 0x12}, {0x7605c17, ...}) test/e2e/framework/resource/resources.go:167 > k8s.io/kubernetes/test/e2e/framework/resource.DeleteResourceAndWaitForGC({0x801de88?, 0xc00205b860}, {{0x0?, 0x0?}, {0x7605c17?, 0x0?}}, {0xc00405eda0, 0x10}, {0x75f43b2, 0x12}) test/e2e/framework/resource/resources.go:83 k8s.io/kubernetes/test/e2e/framework/rc.DeleteRCAndWaitForGC(...) 
test/e2e/framework/rc/rc_utils.go:74 > k8s.io/kubernetes/test/e2e/autoscaling.reserveMemory.func1() test/e2e/autoscaling/cluster_size_autoscaling.go:1332 panic({0x70eb7e0, 0xc0005cf490}) /usr/local/go/src/runtime/panic.go:884 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2.Fail({0xc004eb4eb0, 0x44}, {0xc001993db0?, 0x75b521a?, 0xc001993dd0?}) vendor/github.com/onsi/ginkgo/v2/core_dsl.go:352 k8s.io/kubernetes/test/e2e/framework.Fail({0xc0045e0ea0, 0x2f}, {0xc001993e48?, 0xc0045e0ea0?, 0xc001993e70?}) test/e2e/framework/log.go:61 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7fa3ee0, 0xc00100ebd0}, {0x0?, 0x0?, 0xc000d12b40?}) test/e2e/framework/expect.go:76 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.33() test/e2e/autoscaling/cluster_size_autoscaling.go:959 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc002610300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale up when expendable pod is created [Feature:ClusterSizeAutoscalingScaleUp] (Spec Runtime: 6m48.173s) test/e2e/autoscaling/cluster_size_autoscaling.go:951 In [It] (Node Runtime: 6m40.006s) test/e2e/autoscaling/cluster_size_autoscaling.go:951 At [By Step] deleting ReplicationController memory-reservation in namespace autoscaling-4542, will wait for the garbage collector to delete the pods (Step Runtime: 59.554s) test/e2e/framework/resource/resources.go:69 Spec Goroutine goroutine 6388 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc004475518, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0x80?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0x180?, 0xc0019936d0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x3a0bb5d?, 0x7fa7740?, 0xc000218b80?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/resource.waitForPodsGone(0xc00152ca60, 0x4?, 0x764f42c?) test/e2e/framework/resource/resources.go:177 > k8s.io/kubernetes/test/e2e/framework/resource.deleteObjectAndWaitForGC({0x801de88, 0xc00205b860}, {0x7fb8040, 0xc000b57e40}, 0xc004bf83c0, {0xc00405eda0, 0x10}, {0x75f43b2, 0x12}, {0x7605c17, ...}) test/e2e/framework/resource/resources.go:167 > k8s.io/kubernetes/test/e2e/framework/resource.DeleteResourceAndWaitForGC({0x801de88?, 0xc00205b860}, {{0x0?, 0x0?}, {0x7605c17?, 0x0?}}, {0xc00405eda0, 0x10}, {0x75f43b2, 0x12}) test/e2e/framework/resource/resources.go:83 k8s.io/kubernetes/test/e2e/framework/rc.DeleteRCAndWaitForGC(...) 
test/e2e/framework/rc/rc_utils.go:74 > k8s.io/kubernetes/test/e2e/autoscaling.reserveMemory.func1() test/e2e/autoscaling/cluster_size_autoscaling.go:1332 panic({0x70eb7e0, 0xc0005cf490}) /usr/local/go/src/runtime/panic.go:884 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2.Fail({0xc004eb4eb0, 0x44}, {0xc001993db0?, 0x75b521a?, 0xc001993dd0?}) vendor/github.com/onsi/ginkgo/v2/core_dsl.go:352 k8s.io/kubernetes/test/e2e/framework.Fail({0xc0045e0ea0, 0x2f}, {0xc001993e48?, 0xc0045e0ea0?, 0xc001993e70?}) test/e2e/framework/log.go:61 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7fa3ee0, 0xc00100ebd0}, {0x0?, 0x0?, 0xc000d12b40?}) test/e2e/framework/expect.go:76 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.33() test/e2e/autoscaling/cluster_size_autoscaling.go:959 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc002610300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale up when expendable pod is created [Feature:ClusterSizeAutoscalingScaleUp] (Spec Runtime: 7m8.175s) test/e2e/autoscaling/cluster_size_autoscaling.go:951 In [It] (Node Runtime: 7m0.008s) test/e2e/autoscaling/cluster_size_autoscaling.go:951 At [By Step] deleting ReplicationController memory-reservation in namespace autoscaling-4542, will wait for the garbage collector to delete the pods (Step Runtime: 1m19.556s) test/e2e/framework/resource/resources.go:69 Spec Goroutine goroutine 6388 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc004475518, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0x80?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0x180?, 0xc0019936d0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x3a0bb5d?, 0x7fa7740?, 0xc000218b80?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/resource.waitForPodsGone(0xc00152ca60, 0x4?, 0x764f42c?) test/e2e/framework/resource/resources.go:177 > k8s.io/kubernetes/test/e2e/framework/resource.deleteObjectAndWaitForGC({0x801de88, 0xc00205b860}, {0x7fb8040, 0xc000b57e40}, 0xc004bf83c0, {0xc00405eda0, 0x10}, {0x75f43b2, 0x12}, {0x7605c17, ...}) test/e2e/framework/resource/resources.go:167 > k8s.io/kubernetes/test/e2e/framework/resource.DeleteResourceAndWaitForGC({0x801de88?, 0xc00205b860}, {{0x0?, 0x0?}, {0x7605c17?, 0x0?}}, {0xc00405eda0, 0x10}, {0x75f43b2, 0x12}) test/e2e/framework/resource/resources.go:83 k8s.io/kubernetes/test/e2e/framework/rc.DeleteRCAndWaitForGC(...) 
test/e2e/framework/rc/rc_utils.go:74 > k8s.io/kubernetes/test/e2e/autoscaling.reserveMemory.func1() test/e2e/autoscaling/cluster_size_autoscaling.go:1332 panic({0x70eb7e0, 0xc0005cf490}) /usr/local/go/src/runtime/panic.go:884 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2.Fail({0xc004eb4eb0, 0x44}, {0xc001993db0?, 0x75b521a?, 0xc001993dd0?}) vendor/github.com/onsi/ginkgo/v2/core_dsl.go:352 k8s.io/kubernetes/test/e2e/framework.Fail({0xc0045e0ea0, 0x2f}, {0xc001993e48?, 0xc0045e0ea0?, 0xc001993e70?}) test/e2e/framework/log.go:61 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7fa3ee0, 0xc00100ebd0}, {0x0?, 0x0?, 0xc000d12b40?}) test/e2e/framework/expect.go:76 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.33() test/e2e/autoscaling/cluster_size_autoscaling.go:959 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc002610300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale up when expendable pod is created [Feature:ClusterSizeAutoscalingScaleUp] (Spec Runtime: 7m28.176s) test/e2e/autoscaling/cluster_size_autoscaling.go:951 In [It] (Node Runtime: 7m20.01s) test/e2e/autoscaling/cluster_size_autoscaling.go:951 At [By Step] deleting ReplicationController memory-reservation in namespace autoscaling-4542, will wait for the garbage collector to delete the pods (Step Runtime: 1m39.557s) test/e2e/framework/resource/resources.go:69 Spec Goroutine goroutine 6388 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc004475518, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0x80?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0x180?, 0xc0019936d0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x3a0bb5d?, 0x7fa7740?, 0xc000218b80?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/resource.waitForPodsGone(0xc00152ca60, 0x4?, 0x764f42c?) test/e2e/framework/resource/resources.go:177 > k8s.io/kubernetes/test/e2e/framework/resource.deleteObjectAndWaitForGC({0x801de88, 0xc00205b860}, {0x7fb8040, 0xc000b57e40}, 0xc004bf83c0, {0xc00405eda0, 0x10}, {0x75f43b2, 0x12}, {0x7605c17, ...}) test/e2e/framework/resource/resources.go:167 > k8s.io/kubernetes/test/e2e/framework/resource.DeleteResourceAndWaitForGC({0x801de88?, 0xc00205b860}, {{0x0?, 0x0?}, {0x7605c17?, 0x0?}}, {0xc00405eda0, 0x10}, {0x75f43b2, 0x12}) test/e2e/framework/resource/resources.go:83 k8s.io/kubernetes/test/e2e/framework/rc.DeleteRCAndWaitForGC(...) 
test/e2e/framework/rc/rc_utils.go:74 > k8s.io/kubernetes/test/e2e/autoscaling.reserveMemory.func1() test/e2e/autoscaling/cluster_size_autoscaling.go:1332 panic({0x70eb7e0, 0xc0005cf490}) /usr/local/go/src/runtime/panic.go:884 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2.Fail({0xc004eb4eb0, 0x44}, {0xc001993db0?, 0x75b521a?, 0xc001993dd0?}) vendor/github.com/onsi/ginkgo/v2/core_dsl.go:352 k8s.io/kubernetes/test/e2e/framework.Fail({0xc0045e0ea0, 0x2f}, {0xc001993e48?, 0xc0045e0ea0?, 0xc001993e70?}) test/e2e/framework/log.go:61 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7fa3ee0, 0xc00100ebd0}, {0x0?, 0x0?, 0xc000d12b40?}) test/e2e/framework/expect.go:76 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.33() test/e2e/autoscaling/cluster_size_autoscaling.go:959 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc002610300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ [AfterEach] [sig-autoscaling] Cluster size autoscaling [Slow] test/e2e/framework/node/init/init.go:32 Dec 1 06:59:27.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-autoscaling] Cluster size autoscaling [Slow] test/e2e/autoscaling/cluster_size_autoscaling.go:139 STEP: Restoring initial size of the cluster 12/01/22 06:59:27.349 STEP: Setting size of ca-minion-group to 2 12/01/22 06:59:34.903 Dec 1 06:59:34.903: INFO: Skipping dumping logs from cluster Dec 1 06:59:40.777: INFO: Skipping dumping logs from cluster Dec 1 06:59:40.824: INFO: Waiting for ready nodes 2, current ready 1, not ready nodes 0 Dec 1 07:00:00.876: INFO: Waiting for ready nodes 2, current ready 1, not ready nodes 0 Dec 1 07:00:20.922: INFO: Waiting for ready nodes 2, current ready 1, not ready nodes 0 Dec 1 07:00:40.966: INFO: Waiting for ready nodes 2, current ready 1, not ready nodes 0 Dec 1 07:01:01.013: INFO: Cluster has reached the desired number of ready nodes 2 STEP: Remove taint from node ca-master 12/01/22 07:01:01.059 STEP: Remove taint from node ca-minion-group-086n 12/01/22 07:01:01.101 STEP: Remove taint from node ca-minion-group-vlq2 12/01/22 07:01:01.143 I1201 07:01:01.187546 7918 cluster_size_autoscaling.go:165] Made nodes schedulable again in 128.511034ms [DeferCleanup (Each)] [sig-autoscaling] Cluster size autoscaling [Slow] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-autoscaling] Cluster size autoscaling [Slow] dump namespaces | framework.go:196 STEP: dump namespace information after failure 12/01/22 07:01:01.187 STEP: Collecting events from namespace "autoscaling-4542". 12/01/22 07:01:01.187 STEP: Found 16 events. 
12/01/22 07:01:01.231 Dec 1 07:01:01.231: INFO: At 2022-12-01 06:51:48 +0000 UTC - event for memory-reservation: {replication-controller } SuccessfulCreate: Created pod: memory-reservation-hwqn9 Dec 1 07:01:01.231: INFO: At 2022-12-01 06:51:48 +0000 UTC - event for memory-reservation: {replication-controller } SuccessfulCreate: Created pod: memory-reservation-s8qq9 Dec 1 07:01:01.231: INFO: At 2022-12-01 06:51:48 +0000 UTC - event for memory-reservation: {replication-controller } SuccessfulCreate: Created pod: memory-reservation-gzstw Dec 1 07:01:01.231: INFO: At 2022-12-01 06:51:48 +0000 UTC - event for memory-reservation-gzstw: {default-scheduler } Scheduled: Successfully assigned autoscaling-4542/memory-reservation-gzstw to ca-minion-group-r6jd Dec 1 07:01:01.231: INFO: At 2022-12-01 06:51:48 +0000 UTC - event for memory-reservation-hwqn9: {default-scheduler } Scheduled: Successfully assigned autoscaling-4542/memory-reservation-hwqn9 to ca-minion-group-vlq2 Dec 1 07:01:01.231: INFO: At 2022-12-01 06:51:48 +0000 UTC - event for memory-reservation-hwqn9: {kubelet ca-minion-group-vlq2} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 07:01:01.231: INFO: At 2022-12-01 06:51:48 +0000 UTC - event for memory-reservation-hwqn9: {kubelet ca-minion-group-vlq2} Created: Created container memory-reservation Dec 1 07:01:01.231: INFO: At 2022-12-01 06:51:48 +0000 UTC - event for memory-reservation-hwqn9: {kubelet ca-minion-group-vlq2} Started: Started container memory-reservation Dec 1 07:01:01.231: INFO: At 2022-12-01 06:51:48 +0000 UTC - event for memory-reservation-s8qq9: {default-scheduler } FailedScheduling: 0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 1 node(s) were unschedulable, 3 Insufficient memory. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.. 
Dec 1 07:01:01.231: INFO: At 2022-12-01 06:51:49 +0000 UTC - event for memory-reservation-gzstw: {kubelet ca-minion-group-r6jd} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 07:01:01.231: INFO: At 2022-12-01 06:51:49 +0000 UTC - event for memory-reservation-gzstw: {kubelet ca-minion-group-r6jd} Created: Created container memory-reservation Dec 1 07:01:01.231: INFO: At 2022-12-01 06:51:49 +0000 UTC - event for memory-reservation-gzstw: {kubelet ca-minion-group-r6jd} Started: Started container memory-reservation Dec 1 07:01:01.231: INFO: At 2022-12-01 06:56:52 +0000 UTC - event for memory-reservation-gzstw: {node-controller } NodeNotReady: Node is not ready Dec 1 07:01:01.231: INFO: At 2022-12-01 06:57:28 +0000 UTC - event for memory-reservation-hwqn9: {kubelet ca-minion-group-vlq2} Killing: Stopping container memory-reservation Dec 1 07:01:01.231: INFO: At 2022-12-01 06:57:28 +0000 UTC - event for memory-reservation-s8qq9: {default-scheduler } FailedScheduling: skip schedule deleting pod: autoscaling-4542/memory-reservation-s8qq9 Dec 1 07:01:01.231: INFO: At 2022-12-01 06:59:27 +0000 UTC - event for memory-reservation-gzstw: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod autoscaling-4542/memory-reservation-gzstw Dec 1 07:01:01.273: INFO: POD NODE PHASE GRACE CONDITIONS Dec 1 07:01:01.273: INFO: Dec 1 07:01:01.319: INFO: Logging node info for node ca-master Dec 1 07:01:01.362: INFO: Node Info: &Node{ObjectMeta:{ca-master a2126acf-72e0-4c73-a9ef-ce1238132582 29323 0 2022-12-01 04:35:35 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:ca-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-12-01 04:35:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-12-01 04:35:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-12-01 04:35:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-12-01 06:58:41 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-autoscaling-migs/us-west1-b/ca-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-12-01 04:35:50 +0000 UTC,LastTransitionTime:2022-12-01 04:35:50 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-12-01 06:58:41 +0000 UTC,LastTransitionTime:2022-12-01 04:35:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-12-01 06:58:41 +0000 UTC,LastTransitionTime:2022-12-01 04:35:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-12-01 06:58:41 +0000 UTC,LastTransitionTime:2022-12-01 04:35:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-12-01 06:58:41 +0000 UTC,LastTransitionTime:2022-12-01 04:35:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.105.118.216,},NodeAddress{Type:InternalDNS,Address:ca-master.c.k8s-jkns-gci-autoscaling-migs.internal,},NodeAddress{Type:Hostname,Address:ca-master.c.k8s-jkns-gci-autoscaling-migs.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bac070a384fefb9133ee7878e15673cf,SystemUUID:bac070a3-84fe-fb91-33ee-7878e15673cf,BootID:39b8786f-3724-43ea-9f9b-9333f7876ff8,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.6.6,KubeletVersion:v1.27.0-alpha.0.54+79cba170b55bd0,KubeProxyVersion:v1.27.0-alpha.0.54+79cba170b55bd0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.54_79cba170b55bd0],SizeBytes:135160275,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.54_79cba170b55bd0],SizeBytes:124990265,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.54_79cba170b55bd0],SizeBytes:57660216,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/autoscaling/cluster-autoscaler@sha256:07ab8c89cd0ad296ddb6347febe196d8fe0e1c757656a98f71199860d87cf1a5 registry.k8s.io/autoscaling/cluster-autoscaler:v1.22.0],SizeBytes:24220268,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Dec 1 07:01:01.362: INFO: Logging kubelet events for node ca-master Dec 1 07:01:01.408: INFO: Logging pods the kubelet thinks is on node ca-master Dec 1 07:01:01.518: INFO: cluster-autoscaler-ca-master started at 2022-12-01 04:35:03 +0000 UTC (0+1 container statuses recorded) Dec 1 07:01:01.518: INFO: Container cluster-autoscaler ready: true, restart count 2 Dec 1 07:01:01.518: INFO: l7-lb-controller-ca-master started at 2022-12-01 04:35:03 +0000 UTC (0+1 container statuses recorded) Dec 1 07:01:01.518: INFO: Container l7-lb-controller ready: true, restart count 2 Dec 1 07:01:01.518: INFO: kube-apiserver-ca-master started at 2022-12-01 04:34:44 +0000 UTC (0+1 container statuses recorded) 
Dec 1 07:01:01.518: INFO: Container kube-apiserver ready: true, restart count 0 Dec 1 07:01:01.518: INFO: kube-controller-manager-ca-master started at 2022-12-01 04:34:44 +0000 UTC (0+1 container statuses recorded) Dec 1 07:01:01.518: INFO: Container kube-controller-manager ready: true, restart count 1 Dec 1 07:01:01.518: INFO: etcd-server-events-ca-master started at 2022-12-01 04:34:44 +0000 UTC (0+1 container statuses recorded) Dec 1 07:01:01.518: INFO: Container etcd-container ready: true, restart count 0 Dec 1 07:01:01.518: INFO: etcd-server-ca-master started at 2022-12-01 04:34:44 +0000 UTC (0+1 container statuses recorded) Dec 1 07:01:01.518: INFO: Container etcd-container ready: true, restart count 0 Dec 1 07:01:01.518: INFO: kube-addon-manager-ca-master started at 2022-12-01 04:35:03 +0000 UTC (0+1 container statuses recorded) Dec 1 07:01:01.518: INFO: Container kube-addon-manager ready: true, restart count 0 Dec 1 07:01:01.518: INFO: konnectivity-server-ca-master started at 2022-12-01 04:34:44 +0000 UTC (0+1 container statuses recorded) Dec 1 07:01:01.518: INFO: Container konnectivity-server-container ready: true, restart count 0 Dec 1 07:01:01.518: INFO: kube-scheduler-ca-master started at 2022-12-01 04:34:44 +0000 UTC (0+1 container statuses recorded) Dec 1 07:01:01.518: INFO: Container kube-scheduler ready: true, restart count 0 Dec 1 07:01:01.518: INFO: metadata-proxy-v0.1-4rrgr started at 2022-12-01 04:35:35 +0000 UTC (0+2 container statuses recorded) Dec 1 07:01:01.518: INFO: Container metadata-proxy ready: true, restart count 0 Dec 1 07:01:01.518: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Dec 1 07:01:01.753: INFO: Latency metrics for node ca-master Dec 1 07:01:01.753: INFO: Logging node info for node ca-minion-group-086n Dec 1 07:01:01.799: INFO: Node Info: &Node{ObjectMeta:{ca-minion-group-086n b3402b73-ddd5-4dd1-8d81-13b12f8160f1 29713 0 2022-12-01 07:00:50 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:ca-minion-group-086n kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-12-01 07:00:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubelet Update v1 2022-12-01 07:00:51 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2022-12-01 07:00:55 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-12-01 07:00:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.29.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-12-01 07:00:59 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.29.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-autoscaling-migs/us-west1-b/ca-minion-group-086n,Unschedulable:false,Taints:[]Taint{Taint{Key:DeletionCandidateOfClusterAutoscaler,Value:1669878059,Effect:PreferNoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.29.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-12-01 07:00:55 +0000 UTC,LastTransitionTime:2022-12-01 07:00:54 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-12-01 07:00:55 +0000 UTC,LastTransitionTime:2022-12-01 07:00:54 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning 
properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-12-01 07:00:55 +0000 UTC,LastTransitionTime:2022-12-01 07:00:54 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-12-01 07:00:55 +0000 UTC,LastTransitionTime:2022-12-01 07:00:54 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-12-01 07:00:55 +0000 UTC,LastTransitionTime:2022-12-01 07:00:54 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-12-01 07:00:55 +0000 UTC,LastTransitionTime:2022-12-01 07:00:54 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-12-01 07:00:55 +0000 UTC,LastTransitionTime:2022-12-01 07:00:54 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-12-01 07:00:59 +0000 UTC,LastTransitionTime:2022-12-01 07:00:59 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-12-01 07:00:51 +0000 UTC,LastTransitionTime:2022-12-01 07:00:50 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-12-01 07:00:51 +0000 UTC,LastTransitionTime:2022-12-01 07:00:50 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-12-01 07:00:51 +0000 UTC,LastTransitionTime:2022-12-01 07:00:50 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-12-01 07:00:51 +0000 UTC,LastTransitionTime:2022-12-01 07:00:51 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.32,},NodeAddress{Type:ExternalIP,Address:34.105.95.26,},NodeAddress{Type:InternalDNS,Address:ca-minion-group-086n.c.k8s-jkns-gci-autoscaling-migs.internal,},NodeAddress{Type:Hostname,Address:ca-minion-group-086n.c.k8s-jkns-gci-autoscaling-migs.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5d9a6db093ced157e1678d0114ad119d,SystemUUID:5d9a6db0-93ce-d157-e167-8d0114ad119d,BootID:c7bec5ab-9949-4c82-9dbb-6a37b22f31e0,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.6.6,KubeletVersion:v1.27.0-alpha.0.54+79cba170b55bd0,KubeProxyVersion:v1.27.0-alpha.0.54+79cba170b55bd0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.54_79cba170b55bd0],SizeBytes:67201736,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Dec 1 07:01:01.799: INFO: Logging kubelet events for node ca-minion-group-086n Dec 1 07:01:01.851: INFO: Logging pods the kubelet thinks is on node ca-minion-group-086n Dec 1 07:01:01.939: INFO: konnectivity-agent-tqnr7 started at 2022-12-01 07:01:00 +0000 UTC (0+1 container statuses recorded) Dec 1 07:01:01.939: INFO: Container konnectivity-agent ready: true, restart count 0 Dec 1 07:01:01.939: INFO: kube-proxy-ca-minion-group-086n started at 2022-12-01 07:00:50 +0000 UTC (0+1 container statuses recorded) Dec 1 07:01:01.939: INFO: Container kube-proxy ready: true, restart count 0 Dec 1 07:01:01.939: INFO: metadata-proxy-v0.1-xfkcn started at 2022-12-01 07:00:51 +0000 UTC (0+2 container statuses recorded) Dec 1 07:01:01.939: INFO: Container metadata-proxy ready: true, restart count 0 Dec 1 07:01:01.939: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Dec 1 07:01:02.119: INFO: Latency metrics for node ca-minion-group-086n Dec 1 07:01:02.119: INFO: Logging node info for node ca-minion-group-vlq2 Dec 1 07:01:02.161: INFO: Node Info: &Node{ObjectMeta:{ca-minion-group-vlq2 132befc3-8b36-49c3-8aee-8af679afd99a 28944 0 2022-12-01 05:20:51 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:ca-minion-group-vlq2 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-12-01 05:20:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-12-01 05:21:02 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.14.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-12-01 05:21:02 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2022-12-01 06:56:06 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-12-01 06:56:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.14.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-autoscaling-migs/us-west1-b/ca-minion-group-vlq2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.14.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-12-01 06:56:06 +0000 UTC,LastTransitionTime:2022-12-01 05:20:55 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-12-01 06:56:06 +0000 UTC,LastTransitionTime:2022-12-01 05:20:55 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-12-01 06:56:06 +0000 UTC,LastTransitionTime:2022-12-01 
05:20:55 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-12-01 06:56:06 +0000 UTC,LastTransitionTime:2022-12-01 05:20:55 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-12-01 06:56:06 +0000 UTC,LastTransitionTime:2022-12-01 05:20:55 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-12-01 06:56:06 +0000 UTC,LastTransitionTime:2022-12-01 05:20:55 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-12-01 06:56:06 +0000 UTC,LastTransitionTime:2022-12-01 05:20:55 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-12-01 05:21:02 +0000 UTC,LastTransitionTime:2022-12-01 05:21:02 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-12-01 06:56:16 +0000 UTC,LastTransitionTime:2022-12-01 05:20:51 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-12-01 06:56:16 +0000 UTC,LastTransitionTime:2022-12-01 05:20:51 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-12-01 06:56:16 +0000 UTC,LastTransitionTime:2022-12-01 05:20:51 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-12-01 06:56:16 +0000 UTC,LastTransitionTime:2022-12-01 05:20:51 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.16,},NodeAddress{Type:ExternalIP,Address:35.227.188.214,},NodeAddress{Type:InternalDNS,Address:ca-minion-group-vlq2.c.k8s-jkns-gci-autoscaling-migs.internal,},NodeAddress{Type:Hostname,Address:ca-minion-group-vlq2.c.k8s-jkns-gci-autoscaling-migs.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:240e9092aae0ae79fa5461368e619ce5,SystemUUID:240e9092-aae0-ae79-fa54-61368e619ce5,BootID:02338211-5cf4-4ba4-bf8a-82e73c605696,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.6.6,KubeletVersion:v1.27.0-alpha.0.54+79cba170b55bd0,KubeProxyVersion:v1.27.0-alpha.0.54+79cba170b55bd0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.54_79cba170b55bd0],SizeBytes:67201736,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Dec 1 07:01:02.162: INFO: Logging kubelet events for node ca-minion-group-vlq2 Dec 1 07:01:02.208: INFO: Logging pods the kubelet thinks is on node ca-minion-group-vlq2 Dec 1 07:01:02.382: INFO: konnectivity-agent-x9vdq started at 2022-12-01 05:21:02 +0000 UTC (0+1 container statuses recorded) Dec 1 07:01:02.382: INFO: Container konnectivity-agent ready: true, restart count 0 Dec 1 07:01:02.382: INFO: volume-snapshot-controller-0 started at 2022-12-01 05:49:04 +0000 UTC (0+1 container statuses recorded) Dec 1 07:01:02.382: INFO: Container volume-snapshot-controller ready: true, restart count 0 Dec 1 07:01:02.382: INFO: metadata-proxy-v0.1-mvw84 started at 
2022-12-01 05:20:52 +0000 UTC (0+2 container statuses recorded) Dec 1 07:01:02.382: INFO: Container metadata-proxy ready: true, restart count 0 Dec 1 07:01:02.382: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Dec 1 07:01:02.382: INFO: l7-default-backend-8549d69d99-n8nmc started at 2022-12-01 05:49:04 +0000 UTC (0+1 container statuses recorded) Dec 1 07:01:02.382: INFO: Container default-http-backend ready: true, restart count 0 Dec 1 07:01:02.382: INFO: metrics-server-v0.5.2-867b8754b9-pmk4k started at 2022-12-01 05:30:20 +0000 UTC (0+2 container statuses recorded) Dec 1 07:01:02.382: INFO: Container metrics-server ready: true, restart count 1 Dec 1 07:01:02.382: INFO: Container metrics-server-nanny ready: true, restart count 0 Dec 1 07:01:02.382: INFO: kube-proxy-ca-minion-group-vlq2 started at 2022-12-01 05:20:51 +0000 UTC (0+1 container statuses recorded) Dec 1 07:01:02.382: INFO: Container kube-proxy ready: true, restart count 0 Dec 1 07:01:02.382: INFO: coredns-6d97d5ddb-gpg9p started at 2022-12-01 05:49:04 +0000 UTC (0+1 container statuses recorded) Dec 1 07:01:02.382: INFO: Container coredns ready: true, restart count 0 Dec 1 07:01:02.560: INFO: Latency metrics for node ca-minion-group-vlq2 [DeferCleanup (Each)] [sig-autoscaling] Cluster size autoscaling [Slow] tear down framework | framework.go:193 STEP: Destroying namespace "autoscaling-4542" for this suite. 12/01/22 07:01:02.56
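The failure above comes from WaitForClusterSizeFuncWithUnready, which repeatedly lists nodes and logs "Waiting for cluster with func, current size N, not ready nodes M" until its size predicate holds or the deadline passes; here it gave up with "timeout waiting 1s for appropriate cluster size" because ca-minion-group-r6jd went NotReady after being tainted ToBeDeletedByClusterAutoscaler while the test still expected two ready, schedulable nodes. The Go sketch below approximates that polling pattern with plain client-go; the function name waitForClusterSize, the 20-second interval, and the use of wait.PollImmediate instead of the framework's own sleep loop are illustrative assumptions, not the actual helper.

// Illustrative sketch only: approximates the node-count polling that the
// e2e helper performs, written against plain client-go. Names, interval,
// and the readiness bookkeeping are assumptions for illustration.
package autoscalingsketch

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForClusterSize polls until sizeFunc is satisfied by the number of
// schedulable, Ready nodes, tolerating up to expectedUnready NotReady nodes.
func waitForClusterSize(ctx context.Context, c kubernetes.Interface, sizeFunc func(int) bool, timeout time.Duration, expectedUnready int) error {
	return wait.PollImmediate(20*time.Second, timeout, func() (bool, error) {
		nodes, err := c.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
		if err != nil {
			return false, nil // tolerate transient API errors and keep polling
		}
		ready, notReady := 0, 0
		for _, n := range nodes.Items {
			if n.Spec.Unschedulable {
				continue
			}
			isReady := false
			for _, cond := range n.Status.Conditions {
				if cond.Type == v1.NodeReady && cond.Status == v1.ConditionTrue {
					isReady = true
				}
			}
			if isReady {
				ready++
			} else {
				notReady++
			}
		}
		fmt.Printf("Waiting for cluster with func, current size %d, not ready nodes %d\n", ready, notReady)
		return sizeFunc(ready) && notReady <= expectedUnready, nil
	})
}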
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-autoscaling\]\sCluster\ssize\sautoscaling\s\[Slow\]\sshouldn'\''t\strigger\sadditional\sscale\-ups\sduring\sprocessing\sscale\-up\s\[Feature\:ClusterSizeAutoscalingScaleUp\]$'
test/e2e/autoscaling/cluster_size_autoscaling.go:361 k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.11() test/e2e/autoscaling/cluster_size_autoscaling.go:361 +0x2cc (from junit_01.xml)
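The failure is raised at cluster_size_autoscaling.go:361, where the test waits for the Cluster Autoscaler's reported scale-up activity to settle; as the log below shows, it stayed at InProgress (2, 3) until the timeout. The autoscaler publishes this status as a ConfigMap in kube-system (conventionally named cluster-autoscaler-status). A minimal sketch for watching it outside the suite follows; it assumes that ConfigMap name and a "status" data key, and uses a deliberately loose substring check rather than the e2e helper's parser.

package main

import (
	"context"
	"fmt"
	"strings"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Poll until the status text no longer mentions an in-progress scale-up.
	// Rough substring check only; the real e2e helper parses the status format.
	err = wait.PollImmediate(10*time.Second, 5*time.Minute, func() (bool, error) {
		cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(
			context.TODO(), "cluster-autoscaler-status", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		status := cm.Data["status"]
		fmt.Printf("--- %s\n%s\n", time.Now().Format(time.RFC3339), status)
		return !strings.Contains(status, "InProgress"), nil
	})
	if err != nil {
		fmt.Println("scale-up activity never settled:", err)
	}
}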
[BeforeEach] [sig-autoscaling] Cluster size autoscaling [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 12/01/22 06:04:23.31 Dec 1 06:04:23.310: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename autoscaling 12/01/22 06:04:23.311 STEP: Waiting for a default service account to be provisioned in namespace 12/01/22 06:04:23.438 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 12/01/22 06:04:23.519 [BeforeEach] [sig-autoscaling] Cluster size autoscaling [Slow] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-autoscaling] Cluster size autoscaling [Slow] test/e2e/autoscaling/cluster_size_autoscaling.go:103 STEP: Initial size of ca-minion-group-1: 0 12/01/22 06:04:27.258 STEP: Initial size of ca-minion-group: 2 12/01/22 06:04:31.308 Dec 1 06:04:31.353: INFO: Cluster has reached the desired number of ready nodes 2 STEP: Initial number of schedulable nodes: 2 12/01/22 06:04:31.396 [It] shouldn't trigger additional scale-ups during processing scale-up [Feature:ClusterSizeAutoscalingScaleUp] test/e2e/autoscaling/cluster_size_autoscaling.go:334 I1201 06:04:31.439554 7918 cluster_size_autoscaling.go:1863] Cluster-Autoscaler scale-up status: NoActivity (2, 2) STEP: Schedule more pods than can fit and wait for cluster to scale-up 12/01/22 06:04:31.439 STEP: Running RC which reserves 14406 MB of memory 12/01/22 06:04:31.439 STEP: creating replication controller memory-reservation in namespace autoscaling-1275 12/01/22 06:04:31.439 I1201 06:04:31.486196 7918 runners.go:193] Created replication controller with name: memory-reservation, namespace: autoscaling-1275, replica count: 100 I1201 06:04:41.587233 7918 runners.go:193] memory-reservation Pods: 100 out of 100 created, 0 running, 96 pending, 4 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1201 06:04:51.588247 7918 runners.go:193] memory-reservation Pods: 100 out of 100 created, 3 running, 93 pending, 4 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1201 06:05:01.589166 7918 runners.go:193] memory-reservation Pods: 100 out of 100 created, 15 running, 81 pending, 4 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1201 06:05:11.589443 7918 runners.go:193] memory-reservation Pods: 100 out of 100 created, 62 running, 34 pending, 4 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1201 06:05:21.589808 7918 runners.go:193] memory-reservation Pods: 100 out of 100 created, 96 running, 0 pending, 4 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1201 06:05:31.590264 7918 runners.go:193] memory-reservation Pods: 100 out of 100 created, 96 running, 0 pending, 4 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1201 06:05:31.648584 7918 runners.go:193] Pod memory-reservation-2n4z2 ca-minion-group-vlq2 Running <nil> I1201 06:05:31.648658 7918 runners.go:193] Pod memory-reservation-2sr99 ca-minion-group-m9zb Running <nil> I1201 06:05:31.648674 7918 runners.go:193] Pod memory-reservation-4d4jt ca-minion-group-vlq2 Running <nil> I1201 06:05:31.648686 7918 runners.go:193] Pod memory-reservation-4hnmm ca-minion-group-vlq2 Running <nil> I1201 06:05:31.648697 7918 runners.go:193] Pod memory-reservation-4tqz7 ca-minion-group-m9zb Running <nil> I1201 06:05:31.648708 7918 runners.go:193] Pod memory-reservation-4zdfz ca-minion-group-vlq2 Running <nil> I1201 06:05:31.648719 7918 runners.go:193] Pod memory-reservation-5bqd4 ca-minion-group-vlq2 Running <nil> 
I1201 06:05:31.648735 7918 runners.go:193] Pod memory-reservation-5snmx ca-minion-group-vlq2 Running <nil> I1201 06:05:31.648750 7918 runners.go:193] Pod memory-reservation-5t92g ca-minion-group-m9zb Running <nil> I1201 06:05:31.648765 7918 runners.go:193] Pod memory-reservation-5x4k5 ca-minion-group-m9zb Running <nil> I1201 06:05:31.648780 7918 runners.go:193] Pod memory-reservation-6k27s ca-minion-group-m9zb Running <nil> I1201 06:05:31.648796 7918 runners.go:193] Pod memory-reservation-6p7jq ca-minion-group-m9zb Running <nil> I1201 06:05:31.648811 7918 runners.go:193] Pod memory-reservation-6pq22 ca-minion-group-m9zb Running <nil> I1201 06:05:31.648826 7918 runners.go:193] Pod memory-reservation-6rcdt ca-minion-group-m9zb Running <nil> I1201 06:05:31.648841 7918 runners.go:193] Pod memory-reservation-6tj8k ca-minion-group-m9zb Running <nil> I1201 06:05:31.648852 7918 runners.go:193] Pod memory-reservation-6txrt ca-minion-group-vlq2 Running <nil> I1201 06:05:31.648864 7918 runners.go:193] Pod memory-reservation-6vz4p ca-minion-group-m9zb Running <nil> I1201 06:05:31.648874 7918 runners.go:193] Pod memory-reservation-77dxd ca-minion-group-vlq2 Running <nil> I1201 06:05:31.648886 7918 runners.go:193] Pod memory-reservation-7grfj ca-minion-group-m9zb Running <nil> I1201 06:05:31.648898 7918 runners.go:193] Pod memory-reservation-7pgk7 ca-minion-group-vlq2 Running <nil> I1201 06:05:31.648909 7918 runners.go:193] Pod memory-reservation-84pcv ca-minion-group-vlq2 Running <nil> I1201 06:05:31.648925 7918 runners.go:193] Pod memory-reservation-8bnxf ca-minion-group-m9zb Running <nil> I1201 06:05:31.648936 7918 runners.go:193] Pod memory-reservation-8dxhg ca-minion-group-vlq2 Running <nil> I1201 06:05:31.648948 7918 runners.go:193] Pod memory-reservation-8rgm4 ca-minion-group-m9zb Running <nil> I1201 06:05:31.648959 7918 runners.go:193] Pod memory-reservation-8tpbq ca-minion-group-vlq2 Running <nil> I1201 06:05:31.648970 7918 runners.go:193] Pod memory-reservation-94fvp ca-minion-group-vlq2 Running <nil> I1201 06:05:31.648983 7918 runners.go:193] Pod memory-reservation-96wnw ca-minion-group-m9zb Running <nil> I1201 06:05:31.648994 7918 runners.go:193] Pod memory-reservation-9fcnw ca-minion-group-vlq2 Running <nil> I1201 06:05:31.649010 7918 runners.go:193] Pod memory-reservation-9r4qk ca-minion-group-vlq2 Running <nil> I1201 06:05:31.649024 7918 runners.go:193] Pod memory-reservation-9rc4n Pending <nil> I1201 06:05:31.649036 7918 runners.go:193] Pod memory-reservation-b99kx ca-minion-group-m9zb Running <nil> I1201 06:05:31.649048 7918 runners.go:193] Pod memory-reservation-bzmrx ca-minion-group-vlq2 Running <nil> I1201 06:05:31.649059 7918 runners.go:193] Pod memory-reservation-cdwtq ca-minion-group-vlq2 Running <nil> I1201 06:05:31.649070 7918 runners.go:193] Pod memory-reservation-clq9r ca-minion-group-m9zb Running <nil> I1201 06:05:31.649082 7918 runners.go:193] Pod memory-reservation-ct28f ca-minion-group-m9zb Running <nil> I1201 06:05:31.649094 7918 runners.go:193] Pod memory-reservation-cx8b4 ca-minion-group-m9zb Running <nil> I1201 06:05:31.649106 7918 runners.go:193] Pod memory-reservation-d47bl ca-minion-group-vlq2 Running <nil> I1201 06:05:31.649128 7918 runners.go:193] Pod memory-reservation-dkbcc ca-minion-group-m9zb Running <nil> I1201 06:05:31.649143 7918 runners.go:193] Pod memory-reservation-dkdmk ca-minion-group-m9zb Running <nil> I1201 06:05:31.649156 7918 runners.go:193] Pod memory-reservation-dsdkk ca-minion-group-m9zb Running <nil> I1201 06:05:31.649169 7918 runners.go:193] 
Pod memory-reservation-dtrxv ca-minion-group-m9zb Running <nil> I1201 06:05:31.649181 7918 runners.go:193] Pod memory-reservation-f4nxn ca-minion-group-m9zb Running <nil> I1201 06:05:31.649193 7918 runners.go:193] Pod memory-reservation-fhp65 ca-minion-group-m9zb Running <nil> I1201 06:05:31.649204 7918 runners.go:193] Pod memory-reservation-fkr89 ca-minion-group-m9zb Running <nil> I1201 06:05:31.649216 7918 runners.go:193] Pod memory-reservation-fr6w8 ca-minion-group-vlq2 Running <nil> I1201 06:05:31.649228 7918 runners.go:193] Pod memory-reservation-gd8pp ca-minion-group-vlq2 Running <nil> I1201 06:05:31.649241 7918 runners.go:193] Pod memory-reservation-gdwb8 ca-minion-group-vlq2 Running <nil> I1201 06:05:31.649257 7918 runners.go:193] Pod memory-reservation-gkhl6 ca-minion-group-m9zb Running <nil> I1201 06:05:31.649272 7918 runners.go:193] Pod memory-reservation-hbsgj ca-minion-group-m9zb Running <nil> I1201 06:05:31.649288 7918 runners.go:193] Pod memory-reservation-hdw9f ca-minion-group-m9zb Running <nil> I1201 06:05:31.649303 7918 runners.go:193] Pod memory-reservation-hg5sr ca-minion-group-m9zb Running <nil> I1201 06:05:31.649318 7918 runners.go:193] Pod memory-reservation-hprkb ca-minion-group-m9zb Running <nil> I1201 06:05:31.649333 7918 runners.go:193] Pod memory-reservation-hzm6c ca-minion-group-m9zb Running <nil> I1201 06:05:31.649348 7918 runners.go:193] Pod memory-reservation-j8l8d ca-minion-group-vlq2 Running <nil> I1201 06:05:31.649361 7918 runners.go:193] Pod memory-reservation-jb4jk ca-minion-group-vlq2 Running <nil> I1201 06:05:31.649372 7918 runners.go:193] Pod memory-reservation-jdjst ca-minion-group-m9zb Running <nil> I1201 06:05:31.649387 7918 runners.go:193] Pod memory-reservation-jfrj9 ca-minion-group-m9zb Running <nil> I1201 06:05:31.649400 7918 runners.go:193] Pod memory-reservation-jlqtg ca-minion-group-vlq2 Running <nil> I1201 06:05:31.649413 7918 runners.go:193] Pod memory-reservation-jpvmp ca-minion-group-vlq2 Running <nil> I1201 06:05:31.649426 7918 runners.go:193] Pod memory-reservation-k46fz ca-minion-group-vlq2 Running <nil> I1201 06:05:31.649439 7918 runners.go:193] Pod memory-reservation-k9rms ca-minion-group-m9zb Running <nil> I1201 06:05:31.649452 7918 runners.go:193] Pod memory-reservation-kcwkf ca-minion-group-vlq2 Running <nil> I1201 06:05:31.649467 7918 runners.go:193] Pod memory-reservation-klw2n ca-minion-group-vlq2 Running <nil> I1201 06:05:31.649480 7918 runners.go:193] Pod memory-reservation-l4zc9 ca-minion-group-vlq2 Running <nil> I1201 06:05:31.649496 7918 runners.go:193] Pod memory-reservation-lfl54 ca-minion-group-vlq2 Running <nil> I1201 06:05:31.649511 7918 runners.go:193] Pod memory-reservation-lzmc6 ca-minion-group-vlq2 Running <nil> I1201 06:05:31.649530 7918 runners.go:193] Pod memory-reservation-mqh7b ca-minion-group-vlq2 Running <nil> I1201 06:05:31.649546 7918 runners.go:193] Pod memory-reservation-mrg7r ca-minion-group-m9zb Running <nil> I1201 06:05:31.649561 7918 runners.go:193] Pod memory-reservation-mvtz9 ca-minion-group-m9zb Running <nil> I1201 06:05:31.649576 7918 runners.go:193] Pod memory-reservation-mw5dh ca-minion-group-m9zb Running <nil> I1201 06:05:31.649591 7918 runners.go:193] Pod memory-reservation-mzqw8 ca-minion-group-vlq2 Running <nil> I1201 06:05:31.649604 7918 runners.go:193] Pod memory-reservation-nl4q6 ca-minion-group-m9zb Running <nil> I1201 06:05:31.649617 7918 runners.go:193] Pod memory-reservation-pkp92 ca-minion-group-vlq2 Running <nil> I1201 06:05:31.649630 7918 runners.go:193] Pod 
memory-reservation-qn9ds ca-minion-group-m9zb Running <nil> I1201 06:05:31.649643 7918 runners.go:193] Pod memory-reservation-qr2wt ca-minion-group-m9zb Running <nil> I1201 06:05:31.649660 7918 runners.go:193] Pod memory-reservation-qr54g Pending <nil> I1201 06:05:31.649672 7918 runners.go:193] Pod memory-reservation-r2488 Pending <nil> I1201 06:05:31.649686 7918 runners.go:193] Pod memory-reservation-r2bzk ca-minion-group-vlq2 Running <nil> I1201 06:05:31.649700 7918 runners.go:193] Pod memory-reservation-r7rdc ca-minion-group-vlq2 Running <nil> I1201 06:05:31.649714 7918 runners.go:193] Pod memory-reservation-rh4j6 ca-minion-group-vlq2 Running <nil> I1201 06:05:31.649728 7918 runners.go:193] Pod memory-reservation-rmfvf Pending <nil> I1201 06:05:31.649744 7918 runners.go:193] Pod memory-reservation-rntn9 ca-minion-group-vlq2 Running <nil> I1201 06:05:31.649760 7918 runners.go:193] Pod memory-reservation-s2nzr ca-minion-group-vlq2 Running <nil> I1201 06:05:31.649775 7918 runners.go:193] Pod memory-reservation-s7kq6 ca-minion-group-m9zb Running <nil> I1201 06:05:31.649790 7918 runners.go:193] Pod memory-reservation-scp76 ca-minion-group-m9zb Running <nil> I1201 06:05:31.649805 7918 runners.go:193] Pod memory-reservation-sqsv5 ca-minion-group-m9zb Running <nil> I1201 06:05:31.649820 7918 runners.go:193] Pod memory-reservation-t6r9c ca-minion-group-vlq2 Running <nil> I1201 06:05:31.649835 7918 runners.go:193] Pod memory-reservation-t6wwm ca-minion-group-m9zb Running <nil> I1201 06:05:31.649848 7918 runners.go:193] Pod memory-reservation-t96jb ca-minion-group-m9zb Running <nil> I1201 06:05:31.649861 7918 runners.go:193] Pod memory-reservation-tcr7q ca-minion-group-m9zb Running <nil> I1201 06:05:31.649873 7918 runners.go:193] Pod memory-reservation-thbxc ca-minion-group-vlq2 Running <nil> I1201 06:05:31.649886 7918 runners.go:193] Pod memory-reservation-tsgp5 ca-minion-group-vlq2 Running <nil> I1201 06:05:31.649900 7918 runners.go:193] Pod memory-reservation-vmhf6 ca-minion-group-vlq2 Running <nil> I1201 06:05:31.649915 7918 runners.go:193] Pod memory-reservation-wmmqt ca-minion-group-vlq2 Running <nil> I1201 06:05:31.649929 7918 runners.go:193] Pod memory-reservation-wvr7w ca-minion-group-vlq2 Running <nil> I1201 06:05:31.649944 7918 runners.go:193] Pod memory-reservation-xh6cp ca-minion-group-vlq2 Running <nil> I1201 06:05:31.649959 7918 runners.go:193] Pod memory-reservation-xj55n ca-minion-group-m9zb Running <nil> I1201 06:05:31.649974 7918 runners.go:193] Pod memory-reservation-xr627 ca-minion-group-m9zb Running <nil> I1201 06:05:31.649989 7918 runners.go:193] Pod memory-reservation-zc7zl ca-minion-group-m9zb Running <nil> I1201 06:05:31.650004 7918 runners.go:193] Pod memory-reservation-zv552 ca-minion-group-vlq2 Running <nil> I1201 06:05:31.694687 7918 cluster_size_autoscaling.go:1863] Cluster-Autoscaler scale-up status: InProgress (2, 3) I1201 06:05:31.756847 7918 cluster_size_autoscaling.go:1417] Too many pods are not ready yet: [memory-reservation-9rc4n memory-reservation-qr54g memory-reservation-r2488 memory-reservation-rmfvf] I1201 06:05:51.817493 7918 cluster_size_autoscaling.go:1417] Too many pods are not ready yet: [memory-reservation-9rc4n memory-reservation-qr54g memory-reservation-r2488 memory-reservation-rmfvf] I1201 06:06:11.875641 7918 cluster_size_autoscaling.go:1414] sufficient number of pods ready. 
Tolerating 0 unready STEP: Expect no more scale-up to be happening after all pods are scheduled 12/01/22 06:06:11.875 I1201 06:06:11.919998 7918 cluster_size_autoscaling.go:1863] Cluster-Autoscaler scale-up status: InProgress (2, 3) I1201 06:06:16.962712 7918 cluster_size_autoscaling.go:1863] Cluster-Autoscaler scale-up status: InProgress (2, 3) I1201 06:06:21.962863 7918 cluster_size_autoscaling.go:1863] Cluster-Autoscaler scale-up status: InProgress (2, 3) I1201 06:06:26.962580 7918 cluster_size_autoscaling.go:1863] Cluster-Autoscaler scale-up status: InProgress (2, 3) I1201 06:06:31.962574 7918 cluster_size_autoscaling.go:1863] Cluster-Autoscaler scale-up status: InProgress (2, 3) I1201 06:06:36.962308 7918 cluster_size_autoscaling.go:1863] Cluster-Autoscaler scale-up status: InProgress (2, 3) I1201 06:06:41.962846 7918 cluster_size_autoscaling.go:1863] Cluster-Autoscaler scale-up status: InProgress (2, 3) I1201 06:06:46.963006 7918 cluster_size_autoscaling.go:1863] Cluster-Autoscaler scale-up status: InProgress (2, 3) I1201 06:06:51.962589 7918 cluster_size_autoscaling.go:1863] Cluster-Autoscaler scale-up status: InProgress (2, 3) I1201 06:06:52.004815 7918 cluster_size_autoscaling.go:1863] Cluster-Autoscaler scale-up status: InProgress (2, 3) Dec 1 06:06:52.004: INFO: Unexpected error: <*errors.errorString | 0xc00139f1f0>: { s: "Failed to find expected scale up status: timed out waiting for the condition, last status: &{InProgress 2 3 {507947261 63805471605 <nil>}}, final err: <nil>", } Dec 1 06:06:52.005: FAIL: Failed to find expected scale up status: timed out waiting for the condition, last status: &{InProgress 2 3 {507947261 63805471605 <nil>}}, final err: <nil> Full Stack Trace k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.11() test/e2e/autoscaling/cluster_size_autoscaling.go:361 +0x2cc STEP: deleting ReplicationController memory-reservation in namespace autoscaling-1275, will wait for the garbage collector to delete the pods 12/01/22 06:06:52.005 Dec 1 06:06:52.143: INFO: Deleting ReplicationController memory-reservation took: 44.742451ms Dec 1 06:06:57.144: INFO: Terminating ReplicationController memory-reservation pods took: 5.000965757s [AfterEach] [sig-autoscaling] Cluster size autoscaling [Slow] test/e2e/framework/node/init/init.go:32 Dec 1 06:07:28.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-autoscaling] Cluster size autoscaling [Slow] test/e2e/autoscaling/cluster_size_autoscaling.go:139 STEP: Restoring initial size of the cluster 12/01/22 06:07:28.188 STEP: Setting size of ca-minion-group-1 to 0 12/01/22 06:07:31.701 Dec 1 06:07:31.701: INFO: Skipping dumping logs from cluster Dec 1 06:07:36.177: INFO: Skipping dumping logs from cluster Dec 1 06:07:39.456: INFO: Waiting for ready nodes 2, current ready 3, not ready nodes 0 Dec 1 06:07:59.503: INFO: Waiting for ready nodes 2, current ready 3, not ready nodes 0 Dec 1 06:08:19.555: INFO: Condition Ready of node ca-minion-group-1-qrkr is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Dec 1 06:08:19.555: INFO: Waiting for ready nodes 2, current ready 2, not ready nodes 1 Dec 1 06:08:39.604: INFO: Condition Ready of node ca-minion-group-1-qrkr is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-12-01 06:08:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-12-01 06:08:21 +0000 UTC}]. 
Failure Dec 1 06:08:39.604: INFO: Waiting for ready nodes 2, current ready 2, not ready nodes 1 Dec 1 06:08:59.651: INFO: Condition Ready of node ca-minion-group-1-qrkr is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-12-01 06:08:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-12-01 06:08:21 +0000 UTC}]. Failure Dec 1 06:08:59.651: INFO: Waiting for ready nodes 2, current ready 2, not ready nodes 1 Dec 1 06:09:19.698: INFO: Condition Ready of node ca-minion-group-1-qrkr is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-12-01 06:08:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-12-01 06:08:21 +0000 UTC}]. Failure Dec 1 06:09:19.698: INFO: Waiting for ready nodes 2, current ready 2, not ready nodes 1 Dec 1 06:09:39.744: INFO: Condition Ready of node ca-minion-group-1-qrkr is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-12-01 06:08:16 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-12-01 06:08:21 +0000 UTC}]. Failure Dec 1 06:09:39.744: INFO: Waiting for ready nodes 2, current ready 2, not ready nodes 1 Dec 1 06:09:59.792: INFO: Cluster has reached the desired number of ready nodes 2 STEP: Remove taint from node ca-master 12/01/22 06:09:59.837 STEP: Remove taint from node ca-minion-group-m9zb 12/01/22 06:09:59.88 STEP: Remove taint from node ca-minion-group-vlq2 12/01/22 06:09:59.924 I1201 06:09:59.968455 7918 cluster_size_autoscaling.go:165] Made nodes schedulable again in 131.433526ms [DeferCleanup (Each)] [sig-autoscaling] Cluster size autoscaling [Slow] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-autoscaling] Cluster size autoscaling [Slow] dump namespaces | framework.go:196 STEP: dump namespace information after failure 12/01/22 06:09:59.968 STEP: Collecting events from namespace "autoscaling-1275". 12/01/22 06:09:59.968 STEP: Found 556 events. 
12/01/22 06:10:00.04 Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:31 +0000 UTC - event for memory-reservation: {replication-controller } SuccessfulCreate: Created pod: memory-reservation-9r4qk Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:31 +0000 UTC - event for memory-reservation: {replication-controller } SuccessfulCreate: Created pod: memory-reservation-wmmqt Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:31 +0000 UTC - event for memory-reservation: {replication-controller } SuccessfulCreate: Created pod: memory-reservation-fr6w8 Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:31 +0000 UTC - event for memory-reservation: {replication-controller } SuccessfulCreate: Created pod: memory-reservation-jlqtg Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:31 +0000 UTC - event for memory-reservation: {replication-controller } SuccessfulCreate: Created pod: memory-reservation-klw2n Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:31 +0000 UTC - event for memory-reservation: {replication-controller } SuccessfulCreate: Created pod: memory-reservation-8tpbq Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:31 +0000 UTC - event for memory-reservation: {replication-controller } SuccessfulCreate: Created pod: memory-reservation-84pcv Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:31 +0000 UTC - event for memory-reservation: {replication-controller } SuccessfulCreate: Created pod: memory-reservation-gd8pp Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:31 +0000 UTC - event for memory-reservation: {replication-controller } SuccessfulCreate: Created pod: memory-reservation-wvr7w Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:31 +0000 UTC - event for memory-reservation-2n4z2: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-2n4z2 to ca-minion-group-vlq2 Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:31 +0000 UTC - event for memory-reservation-4d4jt: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-4d4jt to ca-minion-group-vlq2 Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:31 +0000 UTC - event for memory-reservation-4hnmm: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-4hnmm to ca-minion-group-vlq2 Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:31 +0000 UTC - event for memory-reservation-5bqd4: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-5bqd4 to ca-minion-group-vlq2 Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:31 +0000 UTC - event for memory-reservation-5snmx: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-5snmx to ca-minion-group-vlq2 Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:31 +0000 UTC - event for memory-reservation-6txrt: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-6txrt to ca-minion-group-vlq2 Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:31 +0000 UTC - event for memory-reservation-84pcv: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-84pcv to ca-minion-group-vlq2 Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:31 +0000 UTC - event for memory-reservation-8dxhg: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-8dxhg to ca-minion-group-vlq2 Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:31 +0000 UTC - event for memory-reservation-8tpbq: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-8tpbq to ca-minion-group-vlq2 Dec 1 06:10:00.041: 
INFO: At 2022-12-01 06:04:31 +0000 UTC - event for memory-reservation-94fvp: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-94fvp to ca-minion-group-vlq2 Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:31 +0000 UTC - event for memory-reservation-9fcnw: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-9fcnw to ca-minion-group-vlq2 Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:31 +0000 UTC - event for memory-reservation-9r4qk: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-9r4qk to ca-minion-group-vlq2 Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:31 +0000 UTC - event for memory-reservation-cdwtq: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-cdwtq to ca-minion-group-vlq2 Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:31 +0000 UTC - event for memory-reservation-fr6w8: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-fr6w8 to ca-minion-group-vlq2 Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:31 +0000 UTC - event for memory-reservation-gd8pp: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-gd8pp to ca-minion-group-vlq2 Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:31 +0000 UTC - event for memory-reservation-jlqtg: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-jlqtg to ca-minion-group-vlq2 Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:31 +0000 UTC - event for memory-reservation-jpvmp: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-jpvmp to ca-minion-group-vlq2 Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:31 +0000 UTC - event for memory-reservation-k46fz: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-k46fz to ca-minion-group-vlq2 Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:31 +0000 UTC - event for memory-reservation-klw2n: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-klw2n to ca-minion-group-vlq2 Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:31 +0000 UTC - event for memory-reservation-l4zc9: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-l4zc9 to ca-minion-group-vlq2 Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:31 +0000 UTC - event for memory-reservation-lzmc6: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-lzmc6 to ca-minion-group-vlq2 Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:31 +0000 UTC - event for memory-reservation-mqh7b: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-mqh7b to ca-minion-group-vlq2 Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:31 +0000 UTC - event for memory-reservation-mzqw8: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-mzqw8 to ca-minion-group-vlq2 Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:31 +0000 UTC - event for memory-reservation-r7rdc: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-r7rdc to ca-minion-group-vlq2 Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:31 +0000 UTC - event for memory-reservation-rh4j6: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-rh4j6 to ca-minion-group-vlq2 Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:31 +0000 UTC - event for memory-reservation-rntn9: 
{default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-rntn9 to ca-minion-group-vlq2 Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:31 +0000 UTC - event for memory-reservation-s2nzr: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-s2nzr to ca-minion-group-vlq2 Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:31 +0000 UTC - event for memory-reservation-tsgp5: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-tsgp5 to ca-minion-group-vlq2 Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:31 +0000 UTC - event for memory-reservation-vmhf6: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-vmhf6 to ca-minion-group-vlq2 Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:31 +0000 UTC - event for memory-reservation-wmmqt: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-wmmqt to ca-minion-group-vlq2 Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:31 +0000 UTC - event for memory-reservation-wvr7w: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-wvr7w to ca-minion-group-vlq2 Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:31 +0000 UTC - event for memory-reservation-xh6cp: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-xh6cp to ca-minion-group-vlq2 Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:32 +0000 UTC - event for memory-reservation-4tqz7: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-4tqz7 to ca-minion-group-m9zb Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:32 +0000 UTC - event for memory-reservation-4zdfz: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-4zdfz to ca-minion-group-vlq2 Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:32 +0000 UTC - event for memory-reservation-6k27s: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-6k27s to ca-minion-group-m9zb Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:32 +0000 UTC - event for memory-reservation-77dxd: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-77dxd to ca-minion-group-vlq2 Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:32 +0000 UTC - event for memory-reservation-7pgk7: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-7pgk7 to ca-minion-group-vlq2 Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:32 +0000 UTC - event for memory-reservation-8tpbq: {kubelet ca-minion-group-vlq2} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-zdpq7" : failed to sync configmap cache: timed out waiting for the condition Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:32 +0000 UTC - event for memory-reservation-bzmrx: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-bzmrx to ca-minion-group-vlq2 Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:32 +0000 UTC - event for memory-reservation-d47bl: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-d47bl to ca-minion-group-vlq2 Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:32 +0000 UTC - event for memory-reservation-fr6w8: {kubelet ca-minion-group-vlq2} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-9b62v" : failed to sync configmap cache: timed out waiting for the condition Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:32 +0000 UTC - event for 
memory-reservation-gdwb8: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-gdwb8 to ca-minion-group-vlq2 Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:32 +0000 UTC - event for memory-reservation-hbsgj: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-hbsgj to ca-minion-group-m9zb Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:32 +0000 UTC - event for memory-reservation-j8l8d: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-j8l8d to ca-minion-group-vlq2 Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:32 +0000 UTC - event for memory-reservation-jb4jk: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-jb4jk to ca-minion-group-vlq2 Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:32 +0000 UTC - event for memory-reservation-kcwkf: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-kcwkf to ca-minion-group-vlq2 Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:32 +0000 UTC - event for memory-reservation-klw2n: {kubelet ca-minion-group-vlq2} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-cwql6" : failed to sync configmap cache: timed out waiting for the condition Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:32 +0000 UTC - event for memory-reservation-lfl54: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-lfl54 to ca-minion-group-vlq2 Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:32 +0000 UTC - event for memory-reservation-pkp92: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-pkp92 to ca-minion-group-vlq2 Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:32 +0000 UTC - event for memory-reservation-r2bzk: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-r2bzk to ca-minion-group-vlq2 Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:32 +0000 UTC - event for memory-reservation-t6r9c: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-t6r9c to ca-minion-group-vlq2 Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:32 +0000 UTC - event for memory-reservation-t6wwm: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-t6wwm to ca-minion-group-m9zb Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:32 +0000 UTC - event for memory-reservation-thbxc: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-thbxc to ca-minion-group-vlq2 Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:32 +0000 UTC - event for memory-reservation-wmmqt: {kubelet ca-minion-group-vlq2} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-m4xjd" : failed to sync configmap cache: timed out waiting for the condition Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:32 +0000 UTC - event for memory-reservation-zc7zl: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-zc7zl to ca-minion-group-m9zb Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:32 +0000 UTC - event for memory-reservation-zv552: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-zv552 to ca-minion-group-vlq2 Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:33 +0000 UTC - event for memory-reservation-2sr99: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-2sr99 to ca-minion-group-m9zb Dec 1 06:10:00.041: INFO: At 2022-12-01 
06:04:33 +0000 UTC - event for memory-reservation-5t92g: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-5t92g to ca-minion-group-m9zb Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:33 +0000 UTC - event for memory-reservation-6p7jq: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-6p7jq to ca-minion-group-m9zb Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:33 +0000 UTC - event for memory-reservation-6vz4p: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-6vz4p to ca-minion-group-m9zb Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:33 +0000 UTC - event for memory-reservation-7grfj: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-7grfj to ca-minion-group-m9zb Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:33 +0000 UTC - event for memory-reservation-84pcv: {kubelet ca-minion-group-vlq2} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-njqw2" : failed to sync configmap cache: timed out waiting for the condition Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:33 +0000 UTC - event for memory-reservation-8bnxf: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-8bnxf to ca-minion-group-m9zb Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:33 +0000 UTC - event for memory-reservation-8rgm4: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-8rgm4 to ca-minion-group-m9zb Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:33 +0000 UTC - event for memory-reservation-96wnw: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-96wnw to ca-minion-group-m9zb Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:33 +0000 UTC - event for memory-reservation-9r4qk: {kubelet ca-minion-group-vlq2} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-7msnl" : failed to sync configmap cache: timed out waiting for the condition Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:33 +0000 UTC - event for memory-reservation-ct28f: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-ct28f to ca-minion-group-m9zb Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:33 +0000 UTC - event for memory-reservation-dkbcc: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-dkbcc to ca-minion-group-m9zb Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:33 +0000 UTC - event for memory-reservation-dsdkk: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-dsdkk to ca-minion-group-m9zb Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:33 +0000 UTC - event for memory-reservation-dtrxv: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-dtrxv to ca-minion-group-m9zb Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:33 +0000 UTC - event for memory-reservation-f4nxn: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-f4nxn to ca-minion-group-m9zb Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:33 +0000 UTC - event for memory-reservation-fkr89: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-fkr89 to ca-minion-group-m9zb Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:33 +0000 UTC - event for memory-reservation-gd8pp: {kubelet ca-minion-group-vlq2} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-n7vjg" : failed to sync configmap 
cache: timed out waiting for the condition Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:33 +0000 UTC - event for memory-reservation-hg5sr: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-hg5sr to ca-minion-group-m9zb Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:33 +0000 UTC - event for memory-reservation-hzm6c: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-hzm6c to ca-minion-group-m9zb Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:33 +0000 UTC - event for memory-reservation-k9rms: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-k9rms to ca-minion-group-m9zb Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:33 +0000 UTC - event for memory-reservation-mrg7r: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-mrg7r to ca-minion-group-m9zb Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:33 +0000 UTC - event for memory-reservation-sqsv5: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-sqsv5 to ca-minion-group-m9zb Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:33 +0000 UTC - event for memory-reservation-t96jb: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-t96jb to ca-minion-group-m9zb Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:33 +0000 UTC - event for memory-reservation-tcr7q: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-tcr7q to ca-minion-group-m9zb Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:33 +0000 UTC - event for memory-reservation-vmhf6: {kubelet ca-minion-group-vlq2} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-vwvwd" : failed to sync configmap cache: timed out waiting for the condition Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:33 +0000 UTC - event for memory-reservation-xr627: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-xr627 to ca-minion-group-m9zb Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:34 +0000 UTC - event for memory-reservation-2n4z2: {kubelet ca-minion-group-vlq2} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-dhkjx" : failed to sync configmap cache: timed out waiting for the condition Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:34 +0000 UTC - event for memory-reservation-5snmx: {kubelet ca-minion-group-vlq2} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-mnr9t" : failed to sync configmap cache: timed out waiting for the condition Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:34 +0000 UTC - event for memory-reservation-5x4k5: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-5x4k5 to ca-minion-group-m9zb Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:34 +0000 UTC - event for memory-reservation-6pq22: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-6pq22 to ca-minion-group-m9zb Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:34 +0000 UTC - event for memory-reservation-6rcdt: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-6rcdt to ca-minion-group-m9zb Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:34 +0000 UTC - event for memory-reservation-6tj8k: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-6tj8k to ca-minion-group-m9zb Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:34 +0000 UTC - event for 
memory-reservation-9fcnw: {kubelet ca-minion-group-vlq2} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-mv277" : failed to sync configmap cache: timed out waiting for the condition Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:34 +0000 UTC - event for memory-reservation-b99kx: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-b99kx to ca-minion-group-m9zb Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:34 +0000 UTC - event for memory-reservation-cdwtq: {kubelet ca-minion-group-vlq2} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-t45vc" : failed to sync configmap cache: timed out waiting for the condition Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:34 +0000 UTC - event for memory-reservation-clq9r: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-clq9r to ca-minion-group-m9zb Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:34 +0000 UTC - event for memory-reservation-fhp65: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-fhp65 to ca-minion-group-m9zb Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:34 +0000 UTC - event for memory-reservation-gkhl6: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-gkhl6 to ca-minion-group-m9zb Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:34 +0000 UTC - event for memory-reservation-hdw9f: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-hdw9f to ca-minion-group-m9zb Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:34 +0000 UTC - event for memory-reservation-hprkb: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-hprkb to ca-minion-group-m9zb Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:34 +0000 UTC - event for memory-reservation-jdjst: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-jdjst to ca-minion-group-m9zb Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:34 +0000 UTC - event for memory-reservation-jfrj9: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-jfrj9 to ca-minion-group-m9zb Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:34 +0000 UTC - event for memory-reservation-mvtz9: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-mvtz9 to ca-minion-group-m9zb Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:34 +0000 UTC - event for memory-reservation-mw5dh: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-mw5dh to ca-minion-group-m9zb Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:34 +0000 UTC - event for memory-reservation-nl4q6: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-nl4q6 to ca-minion-group-m9zb Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:34 +0000 UTC - event for memory-reservation-qn9ds: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-qn9ds to ca-minion-group-m9zb Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:34 +0000 UTC - event for memory-reservation-qr2wt: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-qr2wt to ca-minion-group-m9zb Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:34 +0000 UTC - event for memory-reservation-s2nzr: {kubelet ca-minion-group-vlq2} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-qsqcp" : failed to sync configmap cache: timed out waiting for the 
condition Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:34 +0000 UTC - event for memory-reservation-s7kq6: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-s7kq6 to ca-minion-group-m9zb Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:34 +0000 UTC - event for memory-reservation-scp76: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-scp76 to ca-minion-group-m9zb Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:34 +0000 UTC - event for memory-reservation-t6wwm: {kubelet ca-minion-group-m9zb} Started: Started container memory-reservation Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:34 +0000 UTC - event for memory-reservation-t6wwm: {kubelet ca-minion-group-m9zb} Created: Created container memory-reservation Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:34 +0000 UTC - event for memory-reservation-t6wwm: {kubelet ca-minion-group-m9zb} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:34 +0000 UTC - event for memory-reservation-wvr7w: {kubelet ca-minion-group-vlq2} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-8flzj" : failed to sync configmap cache: timed out waiting for the condition Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:35 +0000 UTC - event for memory-reservation: {replication-controller } SuccessfulCreate: (combined from similar events): Created pod: memory-reservation-mqh7b Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:35 +0000 UTC - event for memory-reservation-4hnmm: {kubelet ca-minion-group-vlq2} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-v44nc" : failed to sync configmap cache: timed out waiting for the condition Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:35 +0000 UTC - event for memory-reservation-6txrt: {kubelet ca-minion-group-vlq2} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-jz9t9" : failed to sync configmap cache: timed out waiting for the condition Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:35 +0000 UTC - event for memory-reservation-9rc4n: {default-scheduler } FailedScheduling: 0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 1 node(s) were unschedulable, 2 Insufficient memory. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.. 
Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:35 +0000 UTC - event for memory-reservation-cx8b4: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-cx8b4 to ca-minion-group-m9zb Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:35 +0000 UTC - event for memory-reservation-dkdmk: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-dkdmk to ca-minion-group-m9zb Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:35 +0000 UTC - event for memory-reservation-jlqtg: {kubelet ca-minion-group-vlq2} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-xl7ln" : failed to sync configmap cache: timed out waiting for the condition Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:35 +0000 UTC - event for memory-reservation-l4zc9: {kubelet ca-minion-group-vlq2} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-7fxg6" : failed to sync configmap cache: timed out waiting for the condition Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:35 +0000 UTC - event for memory-reservation-qr54g: {default-scheduler } FailedScheduling: 0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 1 node(s) were unschedulable, 2 Insufficient memory. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.. Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:35 +0000 UTC - event for memory-reservation-r2488: {default-scheduler } FailedScheduling: 0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 1 node(s) were unschedulable, 2 Insufficient memory. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.. Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:35 +0000 UTC - event for memory-reservation-rmfvf: {default-scheduler } FailedScheduling: 0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 1 node(s) were unschedulable, 2 Insufficient memory. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.. 
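The FailedScheduling events above, for the four memory-reservation pods that stayed Pending, are the signal the Cluster Autoscaler reacts to; the TriggeredScaleUp events further down show ca-minion-group-1 being scaled from 0 to 1 in response. As a sketch, those scheduling failures can be pulled out of the 556-event namespace dump with an event field selector; this reuses the same client setup as the earlier sketches, and the namespace name is the generated one from this run.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Namespace generated for this run; core/v1 events support a "reason" field selector.
	evs, err := cs.CoreV1().Events("autoscaling-1275").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "reason=FailedScheduling",
	})
	if err != nil {
		panic(err)
	}
	for _, e := range evs.Items {
		fmt.Printf("%s  %s/%s: %s\n",
			e.LastTimestamp.Format("15:04:05"), e.InvolvedObject.Kind, e.InvolvedObject.Name, e.Message)
	}
}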
Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:35 +0000 UTC - event for memory-reservation-rntn9: {kubelet ca-minion-group-vlq2} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-xrs85" : failed to sync configmap cache: timed out waiting for the condition Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:35 +0000 UTC - event for memory-reservation-xj55n: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-xj55n to ca-minion-group-m9zb Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:36 +0000 UTC - event for memory-reservation-4d4jt: {kubelet ca-minion-group-vlq2} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-xhgdv" : failed to sync configmap cache: timed out waiting for the condition Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:36 +0000 UTC - event for memory-reservation-4tqz7: {kubelet ca-minion-group-m9zb} Created: Created container memory-reservation Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:36 +0000 UTC - event for memory-reservation-4tqz7: {kubelet ca-minion-group-m9zb} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:36 +0000 UTC - event for memory-reservation-hbsgj: {kubelet ca-minion-group-m9zb} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:36 +0000 UTC - event for memory-reservation-hbsgj: {kubelet ca-minion-group-m9zb} Created: Created container memory-reservation Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:36 +0000 UTC - event for memory-reservation-hbsgj: {kubelet ca-minion-group-m9zb} Started: Started container memory-reservation Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:36 +0000 UTC - event for memory-reservation-jpvmp: {kubelet ca-minion-group-vlq2} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-5fz9p" : failed to sync configmap cache: timed out waiting for the condition Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:36 +0000 UTC - event for memory-reservation-mqh7b: {kubelet ca-minion-group-vlq2} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-6hnqg" : failed to sync configmap cache: timed out waiting for the condition Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:36 +0000 UTC - event for memory-reservation-mzqw8: {kubelet ca-minion-group-vlq2} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-r9hsq" : failed to sync configmap cache: timed out waiting for the condition Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:36 +0000 UTC - event for memory-reservation-xh6cp: {kubelet ca-minion-group-vlq2} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-mp6gv" : failed to sync configmap cache: timed out waiting for the condition Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:37 +0000 UTC - event for memory-reservation-4tqz7: {kubelet ca-minion-group-m9zb} Started: Started container memory-reservation Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:37 +0000 UTC - event for memory-reservation-7grfj: {kubelet ca-minion-group-m9zb} Created: Created container memory-reservation Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:37 +0000 UTC - event for memory-reservation-7grfj: {kubelet ca-minion-group-m9zb} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:37 +0000 UTC - event for memory-reservation-94fvp: {kubelet ca-minion-group-vlq2} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-hcrql" : failed to sync 
configmap cache: timed out waiting for the condition Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:37 +0000 UTC - event for memory-reservation-k46fz: {kubelet ca-minion-group-vlq2} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-8pxmt" : failed to sync configmap cache: timed out waiting for the condition Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:37 +0000 UTC - event for memory-reservation-lzmc6: {kubelet ca-minion-group-vlq2} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-xhgsv" : failed to sync configmap cache: timed out waiting for the condition Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:37 +0000 UTC - event for memory-reservation-r7rdc: {kubelet ca-minion-group-vlq2} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-t9pt9" : failed to sync configmap cache: timed out waiting for the condition Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:37 +0000 UTC - event for memory-reservation-tsgp5: {kubelet ca-minion-group-vlq2} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-v4fr9" : failed to sync configmap cache: timed out waiting for the condition Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:37 +0000 UTC - event for memory-reservation-zc7zl: {kubelet ca-minion-group-m9zb} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:37 +0000 UTC - event for memory-reservation-zc7zl: {kubelet ca-minion-group-m9zb} Created: Created container memory-reservation Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:38 +0000 UTC - event for memory-reservation-5bqd4: {kubelet ca-minion-group-vlq2} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-t97br" : failed to sync configmap cache: timed out waiting for the condition Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:38 +0000 UTC - event for memory-reservation-6k27s: {kubelet ca-minion-group-m9zb} Started: Started container memory-reservation Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:38 +0000 UTC - event for memory-reservation-6k27s: {kubelet ca-minion-group-m9zb} Created: Created container memory-reservation Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:38 +0000 UTC - event for memory-reservation-6k27s: {kubelet ca-minion-group-m9zb} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:38 +0000 UTC - event for memory-reservation-7grfj: {kubelet ca-minion-group-m9zb} Started: Started container memory-reservation Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:38 +0000 UTC - event for memory-reservation-8dxhg: {kubelet ca-minion-group-vlq2} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-b7tt6" : failed to sync configmap cache: timed out waiting for the condition Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:38 +0000 UTC - event for memory-reservation-pkp92: {kubelet ca-minion-group-vlq2} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-8zfp8" : failed to sync configmap cache: timed out waiting for the condition Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:38 +0000 UTC - event for memory-reservation-rh4j6: {kubelet ca-minion-group-vlq2} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-h9nql" : failed to sync configmap cache: timed out waiting for the condition Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:38 +0000 UTC - event for memory-reservation-t6r9c: {kubelet ca-minion-group-vlq2} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-6txwz" : failed to sync 
configmap cache: timed out waiting for the condition Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:38 +0000 UTC - event for memory-reservation-zc7zl: {kubelet ca-minion-group-m9zb} Started: Started container memory-reservation Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:39 +0000 UTC - event for memory-reservation-7pgk7: {kubelet ca-minion-group-vlq2} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-p84x2" : failed to sync configmap cache: timed out waiting for the condition Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:39 +0000 UTC - event for memory-reservation-jb4jk: {kubelet ca-minion-group-vlq2} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-4sdzq" : failed to sync configmap cache: timed out waiting for the condition Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:39 +0000 UTC - event for memory-reservation-kcwkf: {kubelet ca-minion-group-vlq2} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-vxsd9" : failed to sync configmap cache: timed out waiting for the condition Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:39 +0000 UTC - event for memory-reservation-r2bzk: {kubelet ca-minion-group-vlq2} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-kxcv2" : failed to sync configmap cache: timed out waiting for the condition Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:40 +0000 UTC - event for memory-reservation-hg5sr: {kubelet ca-minion-group-m9zb} Created: Created container memory-reservation Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:40 +0000 UTC - event for memory-reservation-hg5sr: {kubelet ca-minion-group-m9zb} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.041: INFO: At 2022-12-01 06:04:41 +0000 UTC - event for memory-reservation-hg5sr: {kubelet ca-minion-group-m9zb} Started: Started container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:43 +0000 UTC - event for memory-reservation-5t92g: {kubelet ca-minion-group-m9zb} Created: Created container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:43 +0000 UTC - event for memory-reservation-5t92g: {kubelet ca-minion-group-m9zb} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:43 +0000 UTC - event for memory-reservation-9rc4n: {cluster-autoscaler } TriggeredScaleUp: pod triggered scale-up: [{https://www.googleapis.com/compute/v1/projects/k8s-jkns-gci-autoscaling-migs/zones/us-west1-b/instanceGroups/ca-minion-group-1 0->1 (max: 3)}] Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:43 +0000 UTC - event for memory-reservation-hzm6c: {kubelet ca-minion-group-m9zb} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:43 +0000 UTC - event for memory-reservation-hzm6c: {kubelet ca-minion-group-m9zb} Created: Created container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:43 +0000 UTC - event for memory-reservation-hzm6c: {kubelet ca-minion-group-m9zb} Started: Started container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:43 +0000 UTC - event for memory-reservation-qr54g: {cluster-autoscaler } TriggeredScaleUp: pod triggered scale-up: [{https://www.googleapis.com/compute/v1/projects/k8s-jkns-gci-autoscaling-migs/zones/us-west1-b/instanceGroups/ca-minion-group-1 0->1 (max: 3)}] Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:43 +0000 UTC - event for memory-reservation-r2488: {cluster-autoscaler } TriggeredScaleUp: pod 
triggered scale-up: [{https://www.googleapis.com/compute/v1/projects/k8s-jkns-gci-autoscaling-migs/zones/us-west1-b/instanceGroups/ca-minion-group-1 0->1 (max: 3)}] Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:43 +0000 UTC - event for memory-reservation-rmfvf: {cluster-autoscaler } TriggeredScaleUp: pod triggered scale-up: [{https://www.googleapis.com/compute/v1/projects/k8s-jkns-gci-autoscaling-migs/zones/us-west1-b/instanceGroups/ca-minion-group-1 0->1 (max: 3)}] Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:44 +0000 UTC - event for memory-reservation-5t92g: {kubelet ca-minion-group-m9zb} Started: Started container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:45 +0000 UTC - event for memory-reservation-96wnw: {kubelet ca-minion-group-m9zb} Started: Started container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:45 +0000 UTC - event for memory-reservation-96wnw: {kubelet ca-minion-group-m9zb} Created: Created container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:45 +0000 UTC - event for memory-reservation-96wnw: {kubelet ca-minion-group-m9zb} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:45 +0000 UTC - event for memory-reservation-fkr89: {kubelet ca-minion-group-m9zb} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:45 +0000 UTC - event for memory-reservation-fkr89: {kubelet ca-minion-group-m9zb} Created: Created container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:46 +0000 UTC - event for memory-reservation-f4nxn: {kubelet ca-minion-group-m9zb} Started: Started container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:46 +0000 UTC - event for memory-reservation-f4nxn: {kubelet ca-minion-group-m9zb} Created: Created container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:46 +0000 UTC - event for memory-reservation-f4nxn: {kubelet ca-minion-group-m9zb} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:46 +0000 UTC - event for memory-reservation-fkr89: {kubelet ca-minion-group-m9zb} Started: Started container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:46 +0000 UTC - event for memory-reservation-s2nzr: {kubelet ca-minion-group-vlq2} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:46 +0000 UTC - event for memory-reservation-s2nzr: {kubelet ca-minion-group-vlq2} Created: Created container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:47 +0000 UTC - event for memory-reservation-dsdkk: {kubelet ca-minion-group-m9zb} Created: Created container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:47 +0000 UTC - event for memory-reservation-dsdkk: {kubelet ca-minion-group-m9zb} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:47 +0000 UTC - event for memory-reservation-dsdkk: {kubelet ca-minion-group-m9zb} Started: Started container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:47 +0000 UTC - event for memory-reservation-s2nzr: {kubelet ca-minion-group-vlq2} Started: Started container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:47 +0000 UTC - event for memory-reservation-sqsv5: {kubelet ca-minion-group-m9zb} 
Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:47 +0000 UTC - event for memory-reservation-sqsv5: {kubelet ca-minion-group-m9zb} Created: Created container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:47 +0000 UTC - event for memory-reservation-tcr7q: {kubelet ca-minion-group-m9zb} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:47 +0000 UTC - event for memory-reservation-tcr7q: {kubelet ca-minion-group-m9zb} Created: Created container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:47 +0000 UTC - event for memory-reservation-tcr7q: {kubelet ca-minion-group-m9zb} Started: Started container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:48 +0000 UTC - event for memory-reservation-ct28f: {kubelet ca-minion-group-m9zb} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:48 +0000 UTC - event for memory-reservation-ct28f: {kubelet ca-minion-group-m9zb} Created: Created container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:48 +0000 UTC - event for memory-reservation-gkhl6: {kubelet ca-minion-group-m9zb} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:48 +0000 UTC - event for memory-reservation-gkhl6: {kubelet ca-minion-group-m9zb} Created: Created container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:48 +0000 UTC - event for memory-reservation-mrg7r: {kubelet ca-minion-group-m9zb} Created: Created container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:48 +0000 UTC - event for memory-reservation-mrg7r: {kubelet ca-minion-group-m9zb} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:48 +0000 UTC - event for memory-reservation-mrg7r: {kubelet ca-minion-group-m9zb} Started: Started container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:48 +0000 UTC - event for memory-reservation-sqsv5: {kubelet ca-minion-group-m9zb} Started: Started container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:48 +0000 UTC - event for memory-reservation-xr627: {kubelet ca-minion-group-m9zb} Created: Created container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:48 +0000 UTC - event for memory-reservation-xr627: {kubelet ca-minion-group-m9zb} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:49 +0000 UTC - event for memory-reservation-6p7jq: {kubelet ca-minion-group-m9zb} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:49 +0000 UTC - event for memory-reservation-6p7jq: {kubelet ca-minion-group-m9zb} Created: Created container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:49 +0000 UTC - event for memory-reservation-8bnxf: {kubelet ca-minion-group-m9zb} Created: Created container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:49 +0000 UTC - event for memory-reservation-8bnxf: {kubelet ca-minion-group-m9zb} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:49 +0000 UTC - event for memory-reservation-ct28f: {kubelet 
ca-minion-group-m9zb} Started: Started container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:49 +0000 UTC - event for memory-reservation-dkbcc: {kubelet ca-minion-group-m9zb} Started: Started container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:49 +0000 UTC - event for memory-reservation-dkbcc: {kubelet ca-minion-group-m9zb} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:49 +0000 UTC - event for memory-reservation-dkbcc: {kubelet ca-minion-group-m9zb} Created: Created container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:49 +0000 UTC - event for memory-reservation-dtrxv: {kubelet ca-minion-group-m9zb} Started: Started container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:49 +0000 UTC - event for memory-reservation-dtrxv: {kubelet ca-minion-group-m9zb} Created: Created container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:49 +0000 UTC - event for memory-reservation-dtrxv: {kubelet ca-minion-group-m9zb} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:49 +0000 UTC - event for memory-reservation-gkhl6: {kubelet ca-minion-group-m9zb} Started: Started container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:49 +0000 UTC - event for memory-reservation-xr627: {kubelet ca-minion-group-m9zb} Started: Started container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:50 +0000 UTC - event for memory-reservation-6p7jq: {kubelet ca-minion-group-m9zb} Started: Started container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:50 +0000 UTC - event for memory-reservation-8bnxf: {kubelet ca-minion-group-m9zb} Started: Started container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:50 +0000 UTC - event for memory-reservation-8rgm4: {kubelet ca-minion-group-m9zb} Started: Started container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:50 +0000 UTC - event for memory-reservation-8rgm4: {kubelet ca-minion-group-m9zb} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:50 +0000 UTC - event for memory-reservation-8rgm4: {kubelet ca-minion-group-m9zb} Created: Created container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:50 +0000 UTC - event for memory-reservation-b99kx: {kubelet ca-minion-group-m9zb} Created: Created container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:50 +0000 UTC - event for memory-reservation-b99kx: {kubelet ca-minion-group-m9zb} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:50 +0000 UTC - event for memory-reservation-jfrj9: {kubelet ca-minion-group-m9zb} Created: Created container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:50 +0000 UTC - event for memory-reservation-jfrj9: {kubelet ca-minion-group-m9zb} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:51 +0000 UTC - event for memory-reservation-2sr99: {kubelet ca-minion-group-m9zb} Started: Started container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:51 +0000 UTC - event for memory-reservation-2sr99: {kubelet ca-minion-group-m9zb} Created: Created container memory-reservation Dec 1 
06:10:00.042: INFO: At 2022-12-01 06:04:51 +0000 UTC - event for memory-reservation-2sr99: {kubelet ca-minion-group-m9zb} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:51 +0000 UTC - event for memory-reservation-6vz4p: {kubelet ca-minion-group-m9zb} Started: Started container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:51 +0000 UTC - event for memory-reservation-6vz4p: {kubelet ca-minion-group-m9zb} Created: Created container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:51 +0000 UTC - event for memory-reservation-6vz4p: {kubelet ca-minion-group-m9zb} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:51 +0000 UTC - event for memory-reservation-b99kx: {kubelet ca-minion-group-m9zb} Started: Started container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:51 +0000 UTC - event for memory-reservation-jfrj9: {kubelet ca-minion-group-m9zb} Started: Started container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:51 +0000 UTC - event for memory-reservation-k9rms: {kubelet ca-minion-group-m9zb} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:51 +0000 UTC - event for memory-reservation-k9rms: {kubelet ca-minion-group-m9zb} Started: Started container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:51 +0000 UTC - event for memory-reservation-k9rms: {kubelet ca-minion-group-m9zb} Created: Created container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:51 +0000 UTC - event for memory-reservation-s7kq6: {kubelet ca-minion-group-m9zb} Created: Created container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:51 +0000 UTC - event for memory-reservation-s7kq6: {kubelet ca-minion-group-m9zb} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:51 +0000 UTC - event for memory-reservation-t96jb: {kubelet ca-minion-group-m9zb} Created: Created container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:51 +0000 UTC - event for memory-reservation-t96jb: {kubelet ca-minion-group-m9zb} Started: Started container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:51 +0000 UTC - event for memory-reservation-t96jb: {kubelet ca-minion-group-m9zb} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:51 +0000 UTC - event for memory-reservation-xh6cp: {kubelet ca-minion-group-vlq2} Started: Started container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:51 +0000 UTC - event for memory-reservation-xh6cp: {kubelet ca-minion-group-vlq2} Created: Created container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:51 +0000 UTC - event for memory-reservation-xh6cp: {kubelet ca-minion-group-vlq2} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:52 +0000 UTC - event for memory-reservation-5x4k5: {kubelet ca-minion-group-m9zb} Created: Created container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:52 +0000 UTC - event for memory-reservation-5x4k5: {kubelet ca-minion-group-m9zb} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.042: 
INFO: At 2022-12-01 06:04:52 +0000 UTC - event for memory-reservation-6pq22: {kubelet ca-minion-group-m9zb} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:52 +0000 UTC - event for memory-reservation-6pq22: {kubelet ca-minion-group-m9zb} Created: Created container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:52 +0000 UTC - event for memory-reservation-6rcdt: {kubelet ca-minion-group-m9zb} Created: Created container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:52 +0000 UTC - event for memory-reservation-6rcdt: {kubelet ca-minion-group-m9zb} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:52 +0000 UTC - event for memory-reservation-6tj8k: {kubelet ca-minion-group-m9zb} Created: Created container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:52 +0000 UTC - event for memory-reservation-6tj8k: {kubelet ca-minion-group-m9zb} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:52 +0000 UTC - event for memory-reservation-6tj8k: {kubelet ca-minion-group-m9zb} Started: Started container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:52 +0000 UTC - event for memory-reservation-clq9r: {kubelet ca-minion-group-m9zb} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:52 +0000 UTC - event for memory-reservation-fhp65: {kubelet ca-minion-group-m9zb} Created: Created container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:52 +0000 UTC - event for memory-reservation-fhp65: {kubelet ca-minion-group-m9zb} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:52 +0000 UTC - event for memory-reservation-fhp65: {kubelet ca-minion-group-m9zb} Started: Started container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:52 +0000 UTC - event for memory-reservation-hprkb: {kubelet ca-minion-group-m9zb} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:52 +0000 UTC - event for memory-reservation-jdjst: {kubelet ca-minion-group-m9zb} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:52 +0000 UTC - event for memory-reservation-jdjst: {kubelet ca-minion-group-m9zb} Created: Created container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:52 +0000 UTC - event for memory-reservation-jdjst: {kubelet ca-minion-group-m9zb} Started: Started container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:52 +0000 UTC - event for memory-reservation-mvtz9: {kubelet ca-minion-group-m9zb} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:52 +0000 UTC - event for memory-reservation-mvtz9: {kubelet ca-minion-group-m9zb} Created: Created container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:52 +0000 UTC - event for memory-reservation-mw5dh: {kubelet ca-minion-group-m9zb} Created: Created container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:52 +0000 UTC - event for memory-reservation-mw5dh: {kubelet ca-minion-group-m9zb} Started: Started container memory-reservation Dec 
1 06:10:00.042: INFO: At 2022-12-01 06:04:52 +0000 UTC - event for memory-reservation-mw5dh: {kubelet ca-minion-group-m9zb} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:52 +0000 UTC - event for memory-reservation-nl4q6: {kubelet ca-minion-group-m9zb} Created: Created container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:52 +0000 UTC - event for memory-reservation-nl4q6: {kubelet ca-minion-group-m9zb} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:52 +0000 UTC - event for memory-reservation-qn9ds: {kubelet ca-minion-group-m9zb} Started: Started container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:52 +0000 UTC - event for memory-reservation-qn9ds: {kubelet ca-minion-group-m9zb} Created: Created container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:52 +0000 UTC - event for memory-reservation-qn9ds: {kubelet ca-minion-group-m9zb} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:52 +0000 UTC - event for memory-reservation-s7kq6: {kubelet ca-minion-group-m9zb} Started: Started container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:52 +0000 UTC - event for memory-reservation-scp76: {kubelet ca-minion-group-m9zb} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:52 +0000 UTC - event for memory-reservation-scp76: {kubelet ca-minion-group-m9zb} Created: Created container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:53 +0000 UTC - event for memory-reservation-5x4k5: {kubelet ca-minion-group-m9zb} Started: Started container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:53 +0000 UTC - event for memory-reservation-6pq22: {kubelet ca-minion-group-m9zb} Started: Started container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:53 +0000 UTC - event for memory-reservation-6rcdt: {kubelet ca-minion-group-m9zb} Started: Started container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:53 +0000 UTC - event for memory-reservation-clq9r: {kubelet ca-minion-group-m9zb} Created: Created container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:53 +0000 UTC - event for memory-reservation-clq9r: {kubelet ca-minion-group-m9zb} Started: Started container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:53 +0000 UTC - event for memory-reservation-cx8b4: {kubelet ca-minion-group-m9zb} Created: Created container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:53 +0000 UTC - event for memory-reservation-cx8b4: {kubelet ca-minion-group-m9zb} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:53 +0000 UTC - event for memory-reservation-cx8b4: {kubelet ca-minion-group-m9zb} Started: Started container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:53 +0000 UTC - event for memory-reservation-dkdmk: {kubelet ca-minion-group-m9zb} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:53 +0000 UTC - event for memory-reservation-dkdmk: {kubelet ca-minion-group-m9zb} Started: Started container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:53 +0000 
UTC - event for memory-reservation-dkdmk: {kubelet ca-minion-group-m9zb} Created: Created container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:53 +0000 UTC - event for memory-reservation-hdw9f: {kubelet ca-minion-group-m9zb} Started: Started container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:53 +0000 UTC - event for memory-reservation-hdw9f: {kubelet ca-minion-group-m9zb} Created: Created container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:53 +0000 UTC - event for memory-reservation-hdw9f: {kubelet ca-minion-group-m9zb} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:53 +0000 UTC - event for memory-reservation-hprkb: {kubelet ca-minion-group-m9zb} Created: Created container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:53 +0000 UTC - event for memory-reservation-hprkb: {kubelet ca-minion-group-m9zb} Started: Started container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:53 +0000 UTC - event for memory-reservation-mvtz9: {kubelet ca-minion-group-m9zb} Started: Started container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:53 +0000 UTC - event for memory-reservation-nl4q6: {kubelet ca-minion-group-m9zb} Started: Started container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:53 +0000 UTC - event for memory-reservation-qr2wt: {kubelet ca-minion-group-m9zb} Started: Started container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:53 +0000 UTC - event for memory-reservation-qr2wt: {kubelet ca-minion-group-m9zb} Created: Created container memory-reservation Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:53 +0000 UTC - event for memory-reservation-qr2wt: {kubelet ca-minion-group-m9zb} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.042: INFO: At 2022-12-01 06:04:53 +0000 UTC - event for memory-reservation-scp76: {kubelet ca-minion-group-m9zb} Started: Started container memory-reservation Dec 1 06:10:00.046: INFO: At 2022-12-01 06:04:53 +0000 UTC - event for memory-reservation-xj55n: {kubelet ca-minion-group-m9zb} Created: Created container memory-reservation Dec 1 06:10:00.046: INFO: At 2022-12-01 06:04:53 +0000 UTC - event for memory-reservation-xj55n: {kubelet ca-minion-group-m9zb} Started: Started container memory-reservation Dec 1 06:10:00.046: INFO: At 2022-12-01 06:04:53 +0000 UTC - event for memory-reservation-xj55n: {kubelet ca-minion-group-m9zb} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.046: INFO: At 2022-12-01 06:04:54 +0000 UTC - event for memory-reservation-lzmc6: {kubelet ca-minion-group-vlq2} Created: Created container memory-reservation Dec 1 06:10:00.046: INFO: At 2022-12-01 06:04:54 +0000 UTC - event for memory-reservation-lzmc6: {kubelet ca-minion-group-vlq2} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.046: INFO: At 2022-12-01 06:04:55 +0000 UTC - event for memory-reservation-5snmx: {kubelet ca-minion-group-vlq2} Created: Created container memory-reservation Dec 1 06:10:00.046: INFO: At 2022-12-01 06:04:55 +0000 UTC - event for memory-reservation-5snmx: {kubelet ca-minion-group-vlq2} Started: Started container memory-reservation Dec 1 06:10:00.046: INFO: At 2022-12-01 06:04:55 +0000 UTC - event for memory-reservation-5snmx: {kubelet ca-minion-group-vlq2} Pulled: Container image 
"registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.046: INFO: At 2022-12-01 06:04:55 +0000 UTC - event for memory-reservation-jpvmp: {kubelet ca-minion-group-vlq2} Created: Created container memory-reservation Dec 1 06:10:00.046: INFO: At 2022-12-01 06:04:55 +0000 UTC - event for memory-reservation-jpvmp: {kubelet ca-minion-group-vlq2} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.046: INFO: At 2022-12-01 06:04:55 +0000 UTC - event for memory-reservation-lzmc6: {kubelet ca-minion-group-vlq2} Started: Started container memory-reservation Dec 1 06:10:00.046: INFO: At 2022-12-01 06:04:55 +0000 UTC - event for memory-reservation-mqh7b: {kubelet ca-minion-group-vlq2} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.046: INFO: At 2022-12-01 06:04:55 +0000 UTC - event for memory-reservation-mqh7b: {kubelet ca-minion-group-vlq2} Created: Created container memory-reservation Dec 1 06:10:00.046: INFO: At 2022-12-01 06:04:55 +0000 UTC - event for memory-reservation-mqh7b: {kubelet ca-minion-group-vlq2} Started: Started container memory-reservation Dec 1 06:10:00.046: INFO: At 2022-12-01 06:04:56 +0000 UTC - event for memory-reservation-4d4jt: {kubelet ca-minion-group-vlq2} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.046: INFO: At 2022-12-01 06:04:56 +0000 UTC - event for memory-reservation-4d4jt: {kubelet ca-minion-group-vlq2} Created: Created container memory-reservation Dec 1 06:10:00.046: INFO: At 2022-12-01 06:04:56 +0000 UTC - event for memory-reservation-4d4jt: {kubelet ca-minion-group-vlq2} Started: Started container memory-reservation Dec 1 06:10:00.046: INFO: At 2022-12-01 06:04:56 +0000 UTC - event for memory-reservation-4zdfz: {kubelet ca-minion-group-vlq2} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.046: INFO: At 2022-12-01 06:04:56 +0000 UTC - event for memory-reservation-4zdfz: {kubelet ca-minion-group-vlq2} Created: Created container memory-reservation Dec 1 06:10:00.046: INFO: At 2022-12-01 06:04:56 +0000 UTC - event for memory-reservation-8dxhg: {kubelet ca-minion-group-vlq2} Created: Created container memory-reservation Dec 1 06:10:00.046: INFO: At 2022-12-01 06:04:56 +0000 UTC - event for memory-reservation-8dxhg: {kubelet ca-minion-group-vlq2} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.050: INFO: At 2022-12-01 06:04:56 +0000 UTC - event for memory-reservation-jpvmp: {kubelet ca-minion-group-vlq2} Started: Started container memory-reservation Dec 1 06:10:00.050: INFO: At 2022-12-01 06:04:57 +0000 UTC - event for memory-reservation-4zdfz: {kubelet ca-minion-group-vlq2} Started: Started container memory-reservation Dec 1 06:10:00.050: INFO: At 2022-12-01 06:04:57 +0000 UTC - event for memory-reservation-7pgk7: {kubelet ca-minion-group-vlq2} Started: Started container memory-reservation Dec 1 06:10:00.050: INFO: At 2022-12-01 06:04:57 +0000 UTC - event for memory-reservation-7pgk7: {kubelet ca-minion-group-vlq2} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.050: INFO: At 2022-12-01 06:04:57 +0000 UTC - event for memory-reservation-7pgk7: {kubelet ca-minion-group-vlq2} Created: Created container memory-reservation Dec 1 06:10:00.050: INFO: At 2022-12-01 06:04:57 +0000 UTC - event for memory-reservation-8dxhg: {kubelet ca-minion-group-vlq2} Started: Started container 
memory-reservation Dec 1 06:10:00.050: INFO: At 2022-12-01 06:04:57 +0000 UTC - event for memory-reservation-gdwb8: {kubelet ca-minion-group-vlq2} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.050: INFO: At 2022-12-01 06:04:57 +0000 UTC - event for memory-reservation-gdwb8: {kubelet ca-minion-group-vlq2} Created: Created container memory-reservation Dec 1 06:10:00.050: INFO: At 2022-12-01 06:04:57 +0000 UTC - event for memory-reservation-kcwkf: {kubelet ca-minion-group-vlq2} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.050: INFO: At 2022-12-01 06:04:57 +0000 UTC - event for memory-reservation-kcwkf: {kubelet ca-minion-group-vlq2} Created: Created container memory-reservation Dec 1 06:10:00.050: INFO: At 2022-12-01 06:04:57 +0000 UTC - event for memory-reservation-r2bzk: {kubelet ca-minion-group-vlq2} Created: Created container memory-reservation Dec 1 06:10:00.050: INFO: At 2022-12-01 06:04:57 +0000 UTC - event for memory-reservation-r2bzk: {kubelet ca-minion-group-vlq2} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.050: INFO: At 2022-12-01 06:04:57 +0000 UTC - event for memory-reservation-rh4j6: {kubelet ca-minion-group-vlq2} Started: Started container memory-reservation Dec 1 06:10:00.050: INFO: At 2022-12-01 06:04:57 +0000 UTC - event for memory-reservation-rh4j6: {kubelet ca-minion-group-vlq2} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.050: INFO: At 2022-12-01 06:04:57 +0000 UTC - event for memory-reservation-rh4j6: {kubelet ca-minion-group-vlq2} Created: Created container memory-reservation Dec 1 06:10:00.050: INFO: At 2022-12-01 06:04:57 +0000 UTC - event for memory-reservation-thbxc: {kubelet ca-minion-group-vlq2} Created: Created container memory-reservation Dec 1 06:10:00.050: INFO: At 2022-12-01 06:04:57 +0000 UTC - event for memory-reservation-thbxc: {kubelet ca-minion-group-vlq2} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.050: INFO: At 2022-12-01 06:04:58 +0000 UTC - event for memory-reservation-2n4z2: {kubelet ca-minion-group-vlq2} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.050: INFO: At 2022-12-01 06:04:58 +0000 UTC - event for memory-reservation-2n4z2: {kubelet ca-minion-group-vlq2} Created: Created container memory-reservation Dec 1 06:10:00.050: INFO: At 2022-12-01 06:04:58 +0000 UTC - event for memory-reservation-94fvp: {kubelet ca-minion-group-vlq2} Created: Created container memory-reservation Dec 1 06:10:00.050: INFO: At 2022-12-01 06:04:58 +0000 UTC - event for memory-reservation-94fvp: {kubelet ca-minion-group-vlq2} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.056: INFO: At 2022-12-01 06:04:58 +0000 UTC - event for memory-reservation-gd8pp: {kubelet ca-minion-group-vlq2} Created: Created container memory-reservation Dec 1 06:10:00.056: INFO: At 2022-12-01 06:04:58 +0000 UTC - event for memory-reservation-gd8pp: {kubelet ca-minion-group-vlq2} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.056: INFO: At 2022-12-01 06:04:58 +0000 UTC - event for memory-reservation-gdwb8: {kubelet ca-minion-group-vlq2} Started: Started container memory-reservation Dec 1 06:10:00.056: INFO: At 2022-12-01 06:04:58 +0000 UTC - event for memory-reservation-jb4jk: {kubelet ca-minion-group-vlq2} Created: 
Created container memory-reservation Dec 1 06:10:00.056: INFO: At 2022-12-01 06:04:58 +0000 UTC - event for memory-reservation-jb4jk: {kubelet ca-minion-group-vlq2} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.056: INFO: At 2022-12-01 06:04:58 +0000 UTC - event for memory-reservation-k46fz: {kubelet ca-minion-group-vlq2} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.056: INFO: At 2022-12-01 06:04:58 +0000 UTC - event for memory-reservation-k46fz: {kubelet ca-minion-group-vlq2} Created: Created container memory-reservation Dec 1 06:10:00.056: INFO: At 2022-12-01 06:04:58 +0000 UTC - event for memory-reservation-kcwkf: {kubelet ca-minion-group-vlq2} Started: Started container memory-reservation Dec 1 06:10:00.056: INFO: At 2022-12-01 06:04:58 +0000 UTC - event for memory-reservation-lfl54: {kubelet ca-minion-group-vlq2} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.056: INFO: At 2022-12-01 06:04:58 +0000 UTC - event for memory-reservation-lfl54: {kubelet ca-minion-group-vlq2} Created: Created container memory-reservation Dec 1 06:10:00.056: INFO: At 2022-12-01 06:04:58 +0000 UTC - event for memory-reservation-pkp92: {kubelet ca-minion-group-vlq2} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.056: INFO: At 2022-12-01 06:04:58 +0000 UTC - event for memory-reservation-r2bzk: {kubelet ca-minion-group-vlq2} Started: Started container memory-reservation Dec 1 06:10:00.056: INFO: At 2022-12-01 06:04:58 +0000 UTC - event for memory-reservation-thbxc: {kubelet ca-minion-group-vlq2} Started: Started container memory-reservation Dec 1 06:10:00.056: INFO: At 2022-12-01 06:04:58 +0000 UTC - event for memory-reservation-vmhf6: {kubelet ca-minion-group-vlq2} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.056: INFO: At 2022-12-01 06:04:58 +0000 UTC - event for memory-reservation-vmhf6: {kubelet ca-minion-group-vlq2} Created: Created container memory-reservation Dec 1 06:10:00.056: INFO: At 2022-12-01 06:04:58 +0000 UTC - event for memory-reservation-wvr7w: {kubelet ca-minion-group-vlq2} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.056: INFO: At 2022-12-01 06:04:59 +0000 UTC - event for memory-reservation-4hnmm: {kubelet ca-minion-group-vlq2} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.056: INFO: At 2022-12-01 06:04:59 +0000 UTC - event for memory-reservation-77dxd: {kubelet ca-minion-group-vlq2} Created: Created container memory-reservation Dec 1 06:10:00.056: INFO: At 2022-12-01 06:04:59 +0000 UTC - event for memory-reservation-77dxd: {kubelet ca-minion-group-vlq2} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.056: INFO: At 2022-12-01 06:04:59 +0000 UTC - event for memory-reservation-9fcnw: {kubelet ca-minion-group-vlq2} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.056: INFO: At 2022-12-01 06:04:59 +0000 UTC - event for memory-reservation-bzmrx: {kubelet ca-minion-group-vlq2} Created: Created container memory-reservation Dec 1 06:10:00.059: INFO: At 2022-12-01 06:04:59 +0000 UTC - event for memory-reservation-bzmrx: {kubelet ca-minion-group-vlq2} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.059: INFO: At 2022-12-01 06:04:59 +0000 
UTC - event for memory-reservation-d47bl: {kubelet ca-minion-group-vlq2} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.059: INFO: At 2022-12-01 06:04:59 +0000 UTC - event for memory-reservation-j8l8d: {kubelet ca-minion-group-vlq2} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.059: INFO: At 2022-12-01 06:04:59 +0000 UTC - event for memory-reservation-j8l8d: {kubelet ca-minion-group-vlq2} Created: Created container memory-reservation Dec 1 06:10:00.059: INFO: At 2022-12-01 06:04:59 +0000 UTC - event for memory-reservation-lfl54: {kubelet ca-minion-group-vlq2} Started: Started container memory-reservation Dec 1 06:10:00.059: INFO: At 2022-12-01 06:04:59 +0000 UTC - event for memory-reservation-pkp92: {kubelet ca-minion-group-vlq2} Created: Created container memory-reservation Dec 1 06:10:00.059: INFO: At 2022-12-01 06:04:59 +0000 UTC - event for memory-reservation-r7rdc: {kubelet ca-minion-group-vlq2} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.059: INFO: At 2022-12-01 06:04:59 +0000 UTC - event for memory-reservation-rntn9: {kubelet ca-minion-group-vlq2} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.059: INFO: At 2022-12-01 06:04:59 +0000 UTC - event for memory-reservation-t6r9c: {kubelet ca-minion-group-vlq2} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.059: INFO: At 2022-12-01 06:04:59 +0000 UTC - event for memory-reservation-t6r9c: {kubelet ca-minion-group-vlq2} Created: Created container memory-reservation Dec 1 06:10:00.059: INFO: At 2022-12-01 06:04:59 +0000 UTC - event for memory-reservation-tsgp5: {kubelet ca-minion-group-vlq2} Created: Created container memory-reservation Dec 1 06:10:00.059: INFO: At 2022-12-01 06:04:59 +0000 UTC - event for memory-reservation-tsgp5: {kubelet ca-minion-group-vlq2} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.059: INFO: At 2022-12-01 06:04:59 +0000 UTC - event for memory-reservation-wvr7w: {kubelet ca-minion-group-vlq2} Created: Created container memory-reservation Dec 1 06:10:00.059: INFO: At 2022-12-01 06:04:59 +0000 UTC - event for memory-reservation-zv552: {kubelet ca-minion-group-vlq2} Created: Created container memory-reservation Dec 1 06:10:00.059: INFO: At 2022-12-01 06:04:59 +0000 UTC - event for memory-reservation-zv552: {kubelet ca-minion-group-vlq2} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.059: INFO: At 2022-12-01 06:05:00 +0000 UTC - event for memory-reservation-2n4z2: {kubelet ca-minion-group-vlq2} Started: Started container memory-reservation Dec 1 06:10:00.059: INFO: At 2022-12-01 06:05:00 +0000 UTC - event for memory-reservation-4hnmm: {kubelet ca-minion-group-vlq2} Created: Created container memory-reservation Dec 1 06:10:00.059: INFO: At 2022-12-01 06:05:00 +0000 UTC - event for memory-reservation-5bqd4: {kubelet ca-minion-group-vlq2} Created: Created container memory-reservation Dec 1 06:10:00.059: INFO: At 2022-12-01 06:05:00 +0000 UTC - event for memory-reservation-5bqd4: {kubelet ca-minion-group-vlq2} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.059: INFO: At 2022-12-01 06:05:00 +0000 UTC - event for memory-reservation-6txrt: {kubelet ca-minion-group-vlq2} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 
06:10:00.059: INFO: At 2022-12-01 06:05:00 +0000 UTC - event for memory-reservation-6txrt: {kubelet ca-minion-group-vlq2} Created: Created container memory-reservation Dec 1 06:10:00.064: INFO: At 2022-12-01 06:05:00 +0000 UTC - event for memory-reservation-77dxd: {kubelet ca-minion-group-vlq2} Started: Started container memory-reservation Dec 1 06:10:00.064: INFO: At 2022-12-01 06:05:00 +0000 UTC - event for memory-reservation-84pcv: {kubelet ca-minion-group-vlq2} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.064: INFO: At 2022-12-01 06:05:00 +0000 UTC - event for memory-reservation-84pcv: {kubelet ca-minion-group-vlq2} Created: Created container memory-reservation Dec 1 06:10:00.064: INFO: At 2022-12-01 06:05:00 +0000 UTC - event for memory-reservation-8tpbq: {kubelet ca-minion-group-vlq2} Created: Created container memory-reservation Dec 1 06:10:00.064: INFO: At 2022-12-01 06:05:00 +0000 UTC - event for memory-reservation-8tpbq: {kubelet ca-minion-group-vlq2} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.064: INFO: At 2022-12-01 06:05:00 +0000 UTC - event for memory-reservation-94fvp: {kubelet ca-minion-group-vlq2} Started: Started container memory-reservation Dec 1 06:10:00.064: INFO: At 2022-12-01 06:05:00 +0000 UTC - event for memory-reservation-9fcnw: {kubelet ca-minion-group-vlq2} Created: Created container memory-reservation Dec 1 06:10:00.064: INFO: At 2022-12-01 06:05:00 +0000 UTC - event for memory-reservation-9r4qk: {kubelet ca-minion-group-vlq2} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.064: INFO: At 2022-12-01 06:05:00 +0000 UTC - event for memory-reservation-9r4qk: {kubelet ca-minion-group-vlq2} Created: Created container memory-reservation Dec 1 06:10:00.064: INFO: At 2022-12-01 06:05:00 +0000 UTC - event for memory-reservation-bzmrx: {kubelet ca-minion-group-vlq2} Started: Started container memory-reservation Dec 1 06:10:00.064: INFO: At 2022-12-01 06:05:00 +0000 UTC - event for memory-reservation-cdwtq: {kubelet ca-minion-group-vlq2} Created: Created container memory-reservation Dec 1 06:10:00.064: INFO: At 2022-12-01 06:05:00 +0000 UTC - event for memory-reservation-cdwtq: {kubelet ca-minion-group-vlq2} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.064: INFO: At 2022-12-01 06:05:00 +0000 UTC - event for memory-reservation-d47bl: {kubelet ca-minion-group-vlq2} Created: Created container memory-reservation Dec 1 06:10:00.064: INFO: At 2022-12-01 06:05:00 +0000 UTC - event for memory-reservation-fr6w8: {kubelet ca-minion-group-vlq2} Created: Created container memory-reservation Dec 1 06:10:00.064: INFO: At 2022-12-01 06:05:00 +0000 UTC - event for memory-reservation-fr6w8: {kubelet ca-minion-group-vlq2} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.064: INFO: At 2022-12-01 06:05:00 +0000 UTC - event for memory-reservation-gd8pp: {kubelet ca-minion-group-vlq2} Started: Started container memory-reservation Dec 1 06:10:00.064: INFO: At 2022-12-01 06:05:00 +0000 UTC - event for memory-reservation-j8l8d: {kubelet ca-minion-group-vlq2} Started: Started container memory-reservation Dec 1 06:10:00.064: INFO: At 2022-12-01 06:05:00 +0000 UTC - event for memory-reservation-jb4jk: {kubelet ca-minion-group-vlq2} Started: Started container memory-reservation Dec 1 06:10:00.064: INFO: At 2022-12-01 06:05:00 +0000 UTC - event for 
memory-reservation-jlqtg: {kubelet ca-minion-group-vlq2} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.064: INFO: At 2022-12-01 06:05:00 +0000 UTC - event for memory-reservation-jlqtg: {kubelet ca-minion-group-vlq2} Created: Created container memory-reservation Dec 1 06:10:00.064: INFO: At 2022-12-01 06:05:00 +0000 UTC - event for memory-reservation-k46fz: {kubelet ca-minion-group-vlq2} Started: Started container memory-reservation Dec 1 06:10:00.064: INFO: At 2022-12-01 06:05:00 +0000 UTC - event for memory-reservation-klw2n: {kubelet ca-minion-group-vlq2} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.068: INFO: At 2022-12-01 06:05:00 +0000 UTC - event for memory-reservation-klw2n: {kubelet ca-minion-group-vlq2} Created: Created container memory-reservation Dec 1 06:10:00.068: INFO: At 2022-12-01 06:05:00 +0000 UTC - event for memory-reservation-l4zc9: {kubelet ca-minion-group-vlq2} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.068: INFO: At 2022-12-01 06:05:00 +0000 UTC - event for memory-reservation-l4zc9: {kubelet ca-minion-group-vlq2} Created: Created container memory-reservation Dec 1 06:10:00.068: INFO: At 2022-12-01 06:05:00 +0000 UTC - event for memory-reservation-mzqw8: {kubelet ca-minion-group-vlq2} Created: Created container memory-reservation Dec 1 06:10:00.068: INFO: At 2022-12-01 06:05:00 +0000 UTC - event for memory-reservation-mzqw8: {kubelet ca-minion-group-vlq2} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.068: INFO: At 2022-12-01 06:05:00 +0000 UTC - event for memory-reservation-pkp92: {kubelet ca-minion-group-vlq2} Started: Started container memory-reservation Dec 1 06:10:00.068: INFO: At 2022-12-01 06:05:00 +0000 UTC - event for memory-reservation-r7rdc: {kubelet ca-minion-group-vlq2} Created: Created container memory-reservation Dec 1 06:10:00.068: INFO: At 2022-12-01 06:05:00 +0000 UTC - event for memory-reservation-rntn9: {kubelet ca-minion-group-vlq2} Created: Created container memory-reservation Dec 1 06:10:00.068: INFO: At 2022-12-01 06:05:00 +0000 UTC - event for memory-reservation-t6r9c: {kubelet ca-minion-group-vlq2} Started: Started container memory-reservation Dec 1 06:10:00.068: INFO: At 2022-12-01 06:05:00 +0000 UTC - event for memory-reservation-tsgp5: {kubelet ca-minion-group-vlq2} Started: Started container memory-reservation Dec 1 06:10:00.068: INFO: At 2022-12-01 06:05:00 +0000 UTC - event for memory-reservation-vmhf6: {kubelet ca-minion-group-vlq2} Started: Started container memory-reservation Dec 1 06:10:00.068: INFO: At 2022-12-01 06:05:00 +0000 UTC - event for memory-reservation-wmmqt: {kubelet ca-minion-group-vlq2} Created: Created container memory-reservation Dec 1 06:10:00.068: INFO: At 2022-12-01 06:05:00 +0000 UTC - event for memory-reservation-wmmqt: {kubelet ca-minion-group-vlq2} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.068: INFO: At 2022-12-01 06:05:00 +0000 UTC - event for memory-reservation-wvr7w: {kubelet ca-minion-group-vlq2} Started: Started container memory-reservation Dec 1 06:10:00.068: INFO: At 2022-12-01 06:05:00 +0000 UTC - event for memory-reservation-zv552: {kubelet ca-minion-group-vlq2} Started: Started container memory-reservation Dec 1 06:10:00.068: INFO: At 2022-12-01 06:05:01 +0000 UTC - event for memory-reservation-4hnmm: {kubelet ca-minion-group-vlq2} Started: Started container 
memory-reservation Dec 1 06:10:00.068: INFO: At 2022-12-01 06:05:01 +0000 UTC - event for memory-reservation-5bqd4: {kubelet ca-minion-group-vlq2} Started: Started container memory-reservation Dec 1 06:10:00.068: INFO: At 2022-12-01 06:05:01 +0000 UTC - event for memory-reservation-6txrt: {kubelet ca-minion-group-vlq2} Started: Started container memory-reservation Dec 1 06:10:00.068: INFO: At 2022-12-01 06:05:01 +0000 UTC - event for memory-reservation-84pcv: {kubelet ca-minion-group-vlq2} Started: Started container memory-reservation Dec 1 06:10:00.069: INFO: At 2022-12-01 06:05:01 +0000 UTC - event for memory-reservation-8tpbq: {kubelet ca-minion-group-vlq2} Started: Started container memory-reservation Dec 1 06:10:00.069: INFO: At 2022-12-01 06:05:01 +0000 UTC - event for memory-reservation-9fcnw: {kubelet ca-minion-group-vlq2} Started: Started container memory-reservation Dec 1 06:10:00.069: INFO: At 2022-12-01 06:05:01 +0000 UTC - event for memory-reservation-9r4qk: {kubelet ca-minion-group-vlq2} Started: Started container memory-reservation Dec 1 06:10:00.073: INFO: At 2022-12-01 06:05:01 +0000 UTC - event for memory-reservation-cdwtq: {kubelet ca-minion-group-vlq2} Started: Started container memory-reservation Dec 1 06:10:00.073: INFO: At 2022-12-01 06:05:01 +0000 UTC - event for memory-reservation-d47bl: {kubelet ca-minion-group-vlq2} Started: Started container memory-reservation Dec 1 06:10:00.073: INFO: At 2022-12-01 06:05:01 +0000 UTC - event for memory-reservation-fr6w8: {kubelet ca-minion-group-vlq2} Started: Started container memory-reservation Dec 1 06:10:00.073: INFO: At 2022-12-01 06:05:01 +0000 UTC - event for memory-reservation-jlqtg: {kubelet ca-minion-group-vlq2} Started: Started container memory-reservation Dec 1 06:10:00.073: INFO: At 2022-12-01 06:05:01 +0000 UTC - event for memory-reservation-klw2n: {kubelet ca-minion-group-vlq2} Started: Started container memory-reservation Dec 1 06:10:00.073: INFO: At 2022-12-01 06:05:01 +0000 UTC - event for memory-reservation-l4zc9: {kubelet ca-minion-group-vlq2} Started: Started container memory-reservation Dec 1 06:10:00.073: INFO: At 2022-12-01 06:05:01 +0000 UTC - event for memory-reservation-mzqw8: {kubelet ca-minion-group-vlq2} Started: Started container memory-reservation Dec 1 06:10:00.073: INFO: At 2022-12-01 06:05:01 +0000 UTC - event for memory-reservation-r7rdc: {kubelet ca-minion-group-vlq2} Started: Started container memory-reservation Dec 1 06:10:00.073: INFO: At 2022-12-01 06:05:01 +0000 UTC - event for memory-reservation-rntn9: {kubelet ca-minion-group-vlq2} Started: Started container memory-reservation Dec 1 06:10:00.073: INFO: At 2022-12-01 06:05:01 +0000 UTC - event for memory-reservation-wmmqt: {kubelet ca-minion-group-vlq2} Started: Started container memory-reservation Dec 1 06:10:00.073: INFO: At 2022-12-01 06:06:00 +0000 UTC - event for memory-reservation-9rc4n: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-9rc4n to ca-minion-group-1-qrkr Dec 1 06:10:00.073: INFO: At 2022-12-01 06:06:00 +0000 UTC - event for memory-reservation-qr54g: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-qr54g to ca-minion-group-1-qrkr Dec 1 06:10:00.073: INFO: At 2022-12-01 06:06:00 +0000 UTC - event for memory-reservation-r2488: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-r2488 to ca-minion-group-1-qrkr Dec 1 06:10:00.073: INFO: At 2022-12-01 06:06:00 +0000 UTC - event for 
memory-reservation-rmfvf: {default-scheduler } Scheduled: Successfully assigned autoscaling-1275/memory-reservation-rmfvf to ca-minion-group-1-qrkr Dec 1 06:10:00.073: INFO: At 2022-12-01 06:06:02 +0000 UTC - event for memory-reservation-9rc4n: {kubelet ca-minion-group-1-qrkr} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.073: INFO: At 2022-12-01 06:06:02 +0000 UTC - event for memory-reservation-9rc4n: {kubelet ca-minion-group-1-qrkr} Created: Created container memory-reservation Dec 1 06:10:00.073: INFO: At 2022-12-01 06:06:02 +0000 UTC - event for memory-reservation-9rc4n: {kubelet ca-minion-group-1-qrkr} Started: Started container memory-reservation Dec 1 06:10:00.073: INFO: At 2022-12-01 06:06:02 +0000 UTC - event for memory-reservation-qr54g: {kubelet ca-minion-group-1-qrkr} Started: Started container memory-reservation Dec 1 06:10:00.073: INFO: At 2022-12-01 06:06:02 +0000 UTC - event for memory-reservation-qr54g: {kubelet ca-minion-group-1-qrkr} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.073: INFO: At 2022-12-01 06:06:02 +0000 UTC - event for memory-reservation-qr54g: {kubelet ca-minion-group-1-qrkr} Created: Created container memory-reservation Dec 1 06:10:00.073: INFO: At 2022-12-01 06:06:02 +0000 UTC - event for memory-reservation-r2488: {kubelet ca-minion-group-1-qrkr} Started: Started container memory-reservation Dec 1 06:10:00.078: INFO: At 2022-12-01 06:06:02 +0000 UTC - event for memory-reservation-r2488: {kubelet ca-minion-group-1-qrkr} Created: Created container memory-reservation Dec 1 06:10:00.078: INFO: At 2022-12-01 06:06:02 +0000 UTC - event for memory-reservation-r2488: {kubelet ca-minion-group-1-qrkr} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.078: INFO: At 2022-12-01 06:06:02 +0000 UTC - event for memory-reservation-rmfvf: {kubelet ca-minion-group-1-qrkr} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Dec 1 06:10:00.078: INFO: At 2022-12-01 06:06:02 +0000 UTC - event for memory-reservation-rmfvf: {kubelet ca-minion-group-1-qrkr} Created: Created container memory-reservation Dec 1 06:10:00.078: INFO: At 2022-12-01 06:06:02 +0000 UTC - event for memory-reservation-rmfvf: {kubelet ca-minion-group-1-qrkr} Started: Started container memory-reservation Dec 1 06:10:00.078: INFO: At 2022-12-01 06:06:52 +0000 UTC - event for memory-reservation-4zdfz: {kubelet ca-minion-group-vlq2} Killing: Stopping container memory-reservation Dec 1 06:10:00.078: INFO: At 2022-12-01 06:06:52 +0000 UTC - event for memory-reservation-6rcdt: {kubelet ca-minion-group-m9zb} Killing: Stopping container memory-reservation Dec 1 06:10:00.078: INFO: At 2022-12-01 06:06:52 +0000 UTC - event for memory-reservation-7pgk7: {kubelet ca-minion-group-vlq2} Killing: Stopping container memory-reservation Dec 1 06:10:00.078: INFO: At 2022-12-01 06:06:52 +0000 UTC - event for memory-reservation-84pcv: {kubelet ca-minion-group-vlq2} Killing: Stopping container memory-reservation Dec 1 06:10:00.078: INFO: At 2022-12-01 06:06:52 +0000 UTC - event for memory-reservation-8tpbq: {kubelet ca-minion-group-vlq2} Killing: Stopping container memory-reservation Dec 1 06:10:00.078: INFO: At 2022-12-01 06:06:52 +0000 UTC - event for memory-reservation-clq9r: {kubelet ca-minion-group-m9zb} Killing: Stopping container memory-reservation Dec 1 06:10:00.078: INFO: At 2022-12-01 06:06:52 +0000 UTC - event for memory-reservation-dtrxv: {kubelet 
ca-minion-group-m9zb} Killing: Stopping container memory-reservation Dec 1 06:10:00.078: INFO: At 2022-12-01 06:06:52 +0000 UTC - event for memory-reservation-gkhl6: {kubelet ca-minion-group-m9zb} Killing: Stopping container memory-reservation Dec 1 06:10:00.078: INFO: At 2022-12-01 06:06:52 +0000 UTC - event for memory-reservation-hbsgj: {kubelet ca-minion-group-m9zb} Killing: Stopping container memory-reservation Dec 1 06:10:00.078: INFO: At 2022-12-01 06:06:52 +0000 UTC - event for memory-reservation-hg5sr: {kubelet ca-minion-group-m9zb} Killing: Stopping container memory-reservation Dec 1 06:10:00.078: INFO: At 2022-12-01 06:06:52 +0000 UTC - event for memory-reservation-hprkb: {kubelet ca-minion-group-m9zb} Killing: Stopping container memory-reservation Dec 1 06:10:00.078: INFO: At 2022-12-01 06:06:52 +0000 UTC - event for memory-reservation-hzm6c: {kubelet ca-minion-group-m9zb} Killing: Stopping container memory-reservation Dec 1 06:10:00.078: INFO: At 2022-12-01 06:06:52 +0000 UTC - event for memory-reservation-jb4jk: {kubelet ca-minion-group-vlq2} Killing: Stopping container memory-reservation Dec 1 06:10:00.078: INFO: At 2022-12-01 06:06:52 +0000 UTC - event for memory-reservation-k9rms: {kubelet ca-minion-group-m9zb} Killing: Stopping container memory-reservation Dec 1 06:10:00.078: INFO: At 2022-12-01 06:06:52 +0000 UTC - event for memory-reservation-kcwkf: {kubelet ca-minion-group-vlq2} Killing: Stopping container memory-reservation Dec 1 06:10:00.078: INFO: At 2022-12-01 06:06:52 +0000 UTC - event for memory-reservation-mw5dh: {kubelet ca-minion-group-m9zb} Killing: Stopping container memory-reservation Dec 1 06:10:00.078: INFO: At 2022-12-01 06:06:52 +0000 UTC - event for memory-reservation-nl4q6: {kubelet ca-minion-group-m9zb} Killing: Stopping container memory-reservation Dec 1 06:10:00.082: INFO: At 2022-12-01 06:06:52 +0000 UTC - event for memory-reservation-qr2wt: {kubelet ca-minion-group-m9zb} Killing: Stopping container memory-reservation Dec 1 06:10:00.082: INFO: At 2022-12-01 06:06:52 +0000 UTC - event for memory-reservation-rh4j6: {kubelet ca-minion-group-vlq2} Killing: Stopping container memory-reservation Dec 1 06:10:00.082: INFO: At 2022-12-01 06:06:52 +0000 UTC - event for memory-reservation-rntn9: {kubelet ca-minion-group-vlq2} Killing: Stopping container memory-reservation Dec 1 06:10:00.082: INFO: At 2022-12-01 06:06:52 +0000 UTC - event for memory-reservation-s7kq6: {kubelet ca-minion-group-m9zb} Killing: Stopping container memory-reservation Dec 1 06:10:00.082: INFO: At 2022-12-01 06:06:52 +0000 UTC - event for memory-reservation-tcr7q: {kubelet ca-minion-group-m9zb} Killing: Stopping container memory-reservation Dec 1 06:10:00.082: INFO: At 2022-12-01 06:06:52 +0000 UTC - event for memory-reservation-zc7zl: {kubelet ca-minion-group-m9zb} Killing: Stopping container memory-reservation Dec 1 06:10:00.082: INFO: At 2022-12-01 06:06:53 +0000 UTC - event for memory-reservation-2sr99: {kubelet ca-minion-group-m9zb} Killing: Stopping container memory-reservation Dec 1 06:10:00.082: INFO: At 2022-12-01 06:06:53 +0000 UTC - event for memory-reservation-4tqz7: {kubelet ca-minion-group-m9zb} Killing: Stopping container memory-reservation Dec 1 06:10:00.082: INFO: At 2022-12-01 06:06:53 +0000 UTC - event for memory-reservation-5bqd4: {kubelet ca-minion-group-vlq2} Killing: Stopping container memory-reservation Dec 1 06:10:00.082: INFO: At 2022-12-01 06:06:53 +0000 UTC - event for memory-reservation-6pq22: {kubelet ca-minion-group-m9zb} Killing: Stopping container 
memory-reservation Dec 1 06:10:00.082: INFO: At 2022-12-01 06:06:53 +0000 UTC - event for memory-reservation-77dxd: {kubelet ca-minion-group-vlq2} Killing: Stopping container memory-reservation Dec 1 06:10:00.082: INFO: At 2022-12-01 06:06:53 +0000 UTC - event for memory-reservation-94fvp: {kubelet ca-minion-group-vlq2} Killing: Stopping container memory-reservation Dec 1 06:10:00.082: INFO: At 2022-12-01 06:06:53 +0000 UTC - event for memory-reservation-96wnw: {kubelet ca-minion-group-m9zb} Killing: Stopping container memory-reservation Dec 1 06:10:00.082: INFO: At 2022-12-01 06:06:53 +0000 UTC - event for memory-reservation-dkdmk: {kubelet ca-minion-group-m9zb} Killing: Stopping container memory-reservation Dec 1 06:10:00.082: INFO: At 2022-12-01 06:06:53 +0000 UTC - event for memory-reservation-dsdkk: {kubelet ca-minion-group-m9zb} Killing: Stopping container memory-reservation Dec 1 06:10:00.082: INFO: At 2022-12-01 06:06:53 +0000 UTC - event for memory-reservation-fkr89: {kubelet ca-minion-group-m9zb} Killing: Stopping container memory-reservation Dec 1 06:10:00.082: INFO: At 2022-12-01 06:06:53 +0000 UTC - event for memory-reservation-hdw9f: {kubelet ca-minion-group-m9zb} Killing: Stopping container memory-reservation Dec 1 06:10:00.082: INFO: At 2022-12-01 06:06:53 +0000 UTC - event for memory-reservation-jfrj9: {kubelet ca-minion-group-m9zb} Killing: Stopping container memory-reservation Dec 1 06:10:00.082: INFO: At 2022-12-01 06:06:53 +0000 UTC - event for memory-reservation-lfl54: {kubelet ca-minion-group-vlq2} Killing: Stopping container memory-reservation Dec 1 06:10:00.082: INFO: At 2022-12-01 06:06:53 +0000 UTC - event for memory-reservation-mrg7r: {kubelet ca-minion-group-m9zb} Killing: Stopping container memory-reservation Dec 1 06:10:00.082: INFO: At 2022-12-01 06:06:53 +0000 UTC - event for memory-reservation-mvtz9: {kubelet ca-minion-group-m9zb} Killing: Stopping container memory-reservation Dec 1 06:10:00.082: INFO: At 2022-12-01 06:06:53 +0000 UTC - event for memory-reservation-pkp92: {kubelet ca-minion-group-vlq2} Killing: Stopping container memory-reservation Dec 1 06:10:00.082: INFO: At 2022-12-01 06:06:53 +0000 UTC - event for memory-reservation-qr54g: {kubelet ca-minion-group-1-qrkr} Killing: Stopping container memory-reservation Dec 1 06:10:00.086: INFO: At 2022-12-01 06:06:53 +0000 UTC - event for memory-reservation-r2bzk: {kubelet ca-minion-group-vlq2} Killing: Stopping container memory-reservation Dec 1 06:10:00.086: INFO: At 2022-12-01 06:06:53 +0000 UTC - event for memory-reservation-rmfvf: {kubelet ca-minion-group-1-qrkr} Killing: Stopping container memory-reservation Dec 1 06:10:00.086: INFO: At 2022-12-01 06:06:53 +0000 UTC - event for memory-reservation-t6r9c: {kubelet ca-minion-group-vlq2} Killing: Stopping container memory-reservation Dec 1 06:10:00.086: INFO: At 2022-12-01 06:06:54 +0000 UTC - event for memory-reservation-2n4z2: {kubelet ca-minion-group-vlq2} Killing: Stopping container memory-reservation Dec 1 06:10:00.086: INFO: At 2022-12-01 06:06:54 +0000 UTC - event for memory-reservation-4d4jt: {kubelet ca-minion-group-vlq2} Killing: Stopping container memory-reservation Dec 1 06:10:00.086: INFO: At 2022-12-01 06:06:54 +0000 UTC - event for memory-reservation-6p7jq: {kubelet ca-minion-group-m9zb} Killing: Stopping container memory-reservation Dec 1 06:10:00.086: INFO: At 2022-12-01 06:06:54 +0000 UTC - event for memory-reservation-6vz4p: {kubelet ca-minion-group-m9zb} Killing: Stopping container memory-reservation Dec 1 06:10:00.086: INFO: At 
2022-12-01 06:06:54 +0000 UTC - event for memory-reservation-7grfj: {kubelet ca-minion-group-m9zb} Killing: Stopping container memory-reservation Dec 1 06:10:00.086: INFO: At 2022-12-01 06:06:54 +0000 UTC - event for memory-reservation-bzmrx: {kubelet ca-minion-group-vlq2} Killing: Stopping container memory-reservation Dec 1 06:10:00.086: INFO: At 2022-12-01 06:06:54 +0000 UTC - event for memory-reservation-cdwtq: {kubelet ca-minion-group-vlq2} Killing: Stopping container memory-reservation Dec 1 06:10:00.086: INFO: At 2022-12-01 06:06:54 +0000 UTC - event for memory-reservation-ct28f: {kubelet ca-minion-group-m9zb} Killing: Stopping container memory-reservation Dec 1 06:10:00.086: INFO: At 2022-12-01 06:06:54 +0000 UTC - event for memory-reservation-cx8b4: {kubelet ca-minion-group-m9zb} Killing: Stopping container memory-reservation Dec 1 06:10:00.086: INFO: At 2022-12-01 06:06:54 +0000 UTC - event for memory-reservation-d47bl: {kubelet ca-minion-group-vlq2} Killing: Stopping container memory-reservation Dec 1 06:10:00.086: INFO: At 2022-12-01 06:06:54 +0000 UTC - event for memory-reservation-f4nxn: {kubelet ca-minion-group-m9zb} Killing: Stopping container memory-reservation Dec 1 06:10:00.086: INFO: At 2022-12-01 06:06:54 +0000 UTC - event for memory-reservation-gdwb8: {kubelet ca-minion-group-vlq2} Killing: Stopping container memory-reservation Dec 1 06:10:00.086: INFO: At 2022-12-01 06:06:54 +0000 UTC - event for memory-reservation-jpvmp: {kubelet ca-minion-group-vlq2} Killing: Stopping container memory-reservation Dec 1 06:10:00.086: INFO: At 2022-12-01 06:06:54 +0000 UTC - event for memory-reservation-k46fz: {kubelet ca-minion-group-vlq2} Killing: Stopping container memory-reservation Dec 1 06:10:00.086: INFO: At 2022-12-01 06:06:54 +0000 UTC - event for memory-reservation-mzqw8: {kubelet ca-minion-group-vlq2} Killing: Stopping container memory-reservation Dec 1 06:10:00.086: INFO: At 2022-12-01 06:06:54 +0000 UTC - event for memory-reservation-r2488: {kubelet ca-minion-group-1-qrkr} Killing: Stopping container memory-reservation Dec 1 06:10:00.086: INFO: At 2022-12-01 06:06:54 +0000 UTC - event for memory-reservation-s2nzr: {kubelet ca-minion-group-vlq2} Killing: Stopping container memory-reservation Dec 1 06:10:00.086: INFO: At 2022-12-01 06:06:54 +0000 UTC - event for memory-reservation-wmmqt: {kubelet ca-minion-group-vlq2} Killing: Stopping container memory-reservation Dec 1 06:10:00.086: INFO: At 2022-12-01 06:06:54 +0000 UTC - event for memory-reservation-wvr7w: {kubelet ca-minion-group-vlq2} Killing: Stopping container memory-reservation Dec 1 06:10:00.086: INFO: At 2022-12-01 06:06:54 +0000 UTC - event for memory-reservation-xj55n: {kubelet ca-minion-group-m9zb} Killing: Stopping container memory-reservation Dec 1 06:10:00.091: INFO: At 2022-12-01 06:06:55 +0000 UTC - event for memory-reservation-5snmx: {kubelet ca-minion-group-vlq2} Killing: Stopping container memory-reservation Dec 1 06:10:00.091: INFO: At 2022-12-01 06:06:55 +0000 UTC - event for memory-reservation-5t92g: {kubelet ca-minion-group-m9zb} Killing: Stopping container memory-reservation Dec 1 06:10:00.091: INFO: At 2022-12-01 06:06:55 +0000 UTC - event for memory-reservation-5x4k5: {kubelet ca-minion-group-m9zb} Killing: Stopping container memory-reservation Dec 1 06:10:00.091: INFO: At 2022-12-01 06:06:55 +0000 UTC - event for memory-reservation-6tj8k: {kubelet ca-minion-group-m9zb} Killing: Stopping container memory-reservation Dec 1 06:10:00.091: INFO: At 2022-12-01 06:06:55 +0000 UTC - event for 
memory-reservation-6txrt: {kubelet ca-minion-group-vlq2} Killing: Stopping container memory-reservation Dec 1 06:10:00.091: INFO: At 2022-12-01 06:06:55 +0000 UTC - event for memory-reservation-8bnxf: {kubelet ca-minion-group-m9zb} Killing: Stopping container memory-reservation Dec 1 06:10:00.091: INFO: At 2022-12-01 06:06:55 +0000 UTC - event for memory-reservation-8dxhg: {kubelet ca-minion-group-vlq2} Killing: Stopping container memory-reservation Dec 1 06:10:00.091: INFO: At 2022-12-01 06:06:55 +0000 UTC - event for memory-reservation-b99kx: {kubelet ca-minion-group-m9zb} Killing: Stopping container memory-reservation Dec 1 06:10:00.091: INFO: At 2022-12-01 06:06:55 +0000 UTC - event for memory-reservation-gd8pp: {kubelet ca-minion-group-vlq2} Killing: Stopping container memory-reservation Dec 1 06:10:00.091: INFO: At 2022-12-01 06:06:55 +0000 UTC - event for memory-reservation-j8l8d: {kubelet ca-minion-group-vlq2} Killing: Stopping container memory-reservation Dec 1 06:10:00.091: INFO: At 2022-12-01 06:06:55 +0000 UTC - event for memory-reservation-jlqtg: {kubelet ca-minion-group-vlq2} Killing: Stopping container memory-reservation Dec 1 06:10:00.091: INFO: At 2022-12-01 06:06:55 +0000 UTC - event for memory-reservation-l4zc9: {kubelet ca-minion-group-vlq2} Killing: Stopping container memory-reservation Dec 1 06:10:00.091: INFO: At 2022-12-01 06:06:55 +0000 UTC - event for memory-reservation-mqh7b: {kubelet ca-minion-group-vlq2} Killing: Stopping container memory-reservation Dec 1 06:10:00.091: INFO: At 2022-12-01 06:06:55 +0000 UTC - event for memory-reservation-r7rdc: {kubelet ca-minion-group-vlq2} Killing: Stopping container memory-reservation Dec 1 06:10:00.091: INFO: At 2022-12-01 06:06:55 +0000 UTC - event for memory-reservation-sqsv5: {kubelet ca-minion-group-m9zb} Killing: Stopping container memory-reservation Dec 1 06:10:00.091: INFO: At 2022-12-01 06:06:55 +0000 UTC - event for memory-reservation-thbxc: {kubelet ca-minion-group-vlq2} Killing: Stopping container memory-reservation Dec 1 06:10:00.091: INFO: At 2022-12-01 06:06:55 +0000 UTC - event for memory-reservation-tsgp5: {kubelet ca-minion-group-vlq2} Killing: Stopping container memory-reservation Dec 1 06:10:00.091: INFO: At 2022-12-01 06:06:55 +0000 UTC - event for memory-reservation-vmhf6: {kubelet ca-minion-group-vlq2} Killing: Stopping container memory-reservation Dec 1 06:10:00.091: INFO: At 2022-12-01 06:06:55 +0000 UTC - event for memory-reservation-xh6cp: {kubelet ca-minion-group-vlq2} Killing: Stopping container memory-reservation Dec 1 06:10:00.091: INFO: At 2022-12-01 06:06:55 +0000 UTC - event for memory-reservation-xr627: {kubelet ca-minion-group-m9zb} Killing: Stopping container memory-reservation Dec 1 06:10:00.091: INFO: At 2022-12-01 06:06:56 +0000 UTC - event for memory-reservation-4hnmm: {kubelet ca-minion-group-vlq2} Killing: Stopping container memory-reservation Dec 1 06:10:00.091: INFO: At 2022-12-01 06:06:56 +0000 UTC - event for memory-reservation-6k27s: {kubelet ca-minion-group-m9zb} Killing: Stopping container memory-reservation Dec 1 06:10:00.091: INFO: At 2022-12-01 06:06:56 +0000 UTC - event for memory-reservation-8rgm4: {kubelet ca-minion-group-m9zb} Killing: Stopping container memory-reservation Dec 1 06:10:00.095: INFO: At 2022-12-01 06:06:56 +0000 UTC - event for memory-reservation-9fcnw: {kubelet ca-minion-group-vlq2} Killing: Stopping container memory-reservation Dec 1 06:10:00.095: INFO: At 2022-12-01 06:06:56 +0000 UTC - event for memory-reservation-9r4qk: {kubelet 
ca-minion-group-vlq2} Killing: Stopping container memory-reservation Dec 1 06:10:00.095: INFO: At 2022-12-01 06:06:56 +0000 UTC - event for memory-reservation-9rc4n: {kubelet ca-minion-group-1-qrkr} Killing: Stopping container memory-reservation Dec 1 06:10:00.095: INFO: At 2022-12-01 06:06:56 +0000 UTC - event for memory-reservation-dkbcc: {kubelet ca-minion-group-m9zb} Killing: Stopping container memory-reservation Dec 1 06:10:00.095: INFO: At 2022-12-01 06:06:56 +0000 UTC - event for memory-reservation-fhp65: {kubelet ca-minion-group-m9zb} Killing: Stopping container memory-reservation Dec 1 06:10:00.095: INFO: At 2022-12-01 06:06:56 +0000 UTC - event for memory-reservation-fr6w8: {kubelet ca-minion-group-vlq2} Killing: Stopping container memory-reservation Dec 1 06:10:00.095: INFO: At 2022-12-01 06:06:56 +0000 UTC - event for memory-reservation-jdjst: {kubelet ca-minion-group-m9zb} Killing: Stopping container memory-reservation Dec 1 06:10:00.095: INFO: At 2022-12-01 06:06:56 +0000 UTC - event for memory-reservation-klw2n: {kubelet ca-minion-group-vlq2} Killing: Stopping container memory-reservation Dec 1 06:10:00.095: INFO: At 2022-12-01 06:06:56 +0000 UTC - event for memory-reservation-lzmc6: {kubelet ca-minion-group-vlq2} Killing: Stopping container memory-reservation Dec 1 06:10:00.095: INFO: At 2022-12-01 06:06:56 +0000 UTC - event for memory-reservation-qn9ds: {kubelet ca-minion-group-m9zb} Killing: Stopping container memory-reservation Dec 1 06:10:00.095: INFO: At 2022-12-01 06:06:56 +0000 UTC - event for memory-reservation-scp76: {kubelet ca-minion-group-m9zb} Killing: Stopping container memory-reservation Dec 1 06:10:00.095: INFO: At 2022-12-01 06:06:56 +0000 UTC - event for memory-reservation-t6wwm: {kubelet ca-minion-group-m9zb} Killing: Stopping container memory-reservation Dec 1 06:10:00.095: INFO: At 2022-12-01 06:06:56 +0000 UTC - event for memory-reservation-t96jb: {kubelet ca-minion-group-m9zb} Killing: Stopping container memory-reservation Dec 1 06:10:00.095: INFO: At 2022-12-01 06:06:56 +0000 UTC - event for memory-reservation-zv552: {kubelet ca-minion-group-vlq2} Killing: Stopping container memory-reservation Dec 1 06:10:00.138: INFO: POD NODE PHASE GRACE CONDITIONS Dec 1 06:10:00.138: INFO: Dec 1 06:10:00.184: INFO: Logging node info for node ca-master Dec 1 06:10:00.227: INFO: Node Info: &Node{ObjectMeta:{ca-master a2126acf-72e0-4c73-a9ef-ce1238132582 19059 0 2022-12-01 04:35:35 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:ca-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-12-01 04:35:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-12-01 04:35:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-12-01 04:35:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-12-01 06:07:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-autoscaling-migs/us-west1-b/ca-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-12-01 04:35:50 +0000 UTC,LastTransitionTime:2022-12-01 04:35:50 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-12-01 06:07:38 +0000 UTC,LastTransitionTime:2022-12-01 04:35:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-12-01 06:07:38 +0000 UTC,LastTransitionTime:2022-12-01 04:35:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-12-01 06:07:38 +0000 UTC,LastTransitionTime:2022-12-01 04:35:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-12-01 06:07:38 +0000 UTC,LastTransitionTime:2022-12-01 04:35:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.105.118.216,},NodeAddress{Type:InternalDNS,Address:ca-master.c.k8s-jkns-gci-autoscaling-migs.internal,},NodeAddress{Type:Hostname,Address:ca-master.c.k8s-jkns-gci-autoscaling-migs.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bac070a384fefb9133ee7878e15673cf,SystemUUID:bac070a3-84fe-fb91-33ee-7878e15673cf,BootID:39b8786f-3724-43ea-9f9b-9333f7876ff8,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.6.6,KubeletVersion:v1.27.0-alpha.0.54+79cba170b55bd0,KubeProxyVersion:v1.27.0-alpha.0.54+79cba170b55bd0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.54_79cba170b55bd0],SizeBytes:135160275,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.54_79cba170b55bd0],SizeBytes:124990265,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.54_79cba170b55bd0],SizeBytes:57660216,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/autoscaling/cluster-autoscaler@sha256:07ab8c89cd0ad296ddb6347febe196d8fe0e1c757656a98f71199860d87cf1a5 registry.k8s.io/autoscaling/cluster-autoscaler:v1.22.0],SizeBytes:24220268,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Dec 1 06:10:00.228: INFO: Logging kubelet events for node ca-master Dec 1 06:10:00.273: INFO: Logging pods the kubelet thinks is on node ca-master Dec 1 06:10:00.354: INFO: kube-apiserver-ca-master started at 2022-12-01 04:34:44 +0000 UTC (0+1 container statuses recorded) Dec 1 06:10:00.354: INFO: Container kube-apiserver ready: true, restart count 0 Dec 1 06:10:00.354: INFO: kube-controller-manager-ca-master started at 2022-12-01 04:34:44 +0000 UTC (0+1 container statuses recorded) Dec 
1 06:10:00.354: INFO: Container kube-controller-manager ready: true, restart count 1 Dec 1 06:10:00.354: INFO: etcd-server-events-ca-master started at 2022-12-01 04:34:44 +0000 UTC (0+1 container statuses recorded) Dec 1 06:10:00.354: INFO: Container etcd-container ready: true, restart count 0 Dec 1 06:10:00.354: INFO: etcd-server-ca-master started at 2022-12-01 04:34:44 +0000 UTC (0+1 container statuses recorded) Dec 1 06:10:00.354: INFO: Container etcd-container ready: true, restart count 0 Dec 1 06:10:00.354: INFO: kube-addon-manager-ca-master started at 2022-12-01 04:35:03 +0000 UTC (0+1 container statuses recorded) Dec 1 06:10:00.354: INFO: Container kube-addon-manager ready: true, restart count 0 Dec 1 06:10:00.354: INFO: cluster-autoscaler-ca-master started at 2022-12-01 04:35:03 +0000 UTC (0+1 container statuses recorded) Dec 1 06:10:00.354: INFO: Container cluster-autoscaler ready: true, restart count 2 Dec 1 06:10:00.354: INFO: l7-lb-controller-ca-master started at 2022-12-01 04:35:03 +0000 UTC (0+1 container statuses recorded) Dec 1 06:10:00.354: INFO: Container l7-lb-controller ready: true, restart count 2 Dec 1 06:10:00.354: INFO: konnectivity-server-ca-master started at 2022-12-01 04:34:44 +0000 UTC (0+1 container statuses recorded) Dec 1 06:10:00.354: INFO: Container konnectivity-server-container ready: true, restart count 0 Dec 1 06:10:00.354: INFO: kube-scheduler-ca-master started at 2022-12-01 04:34:44 +0000 UTC (0+1 container statuses recorded) Dec 1 06:10:00.354: INFO: Container kube-scheduler ready: true, restart count 0 Dec 1 06:10:00.354: INFO: metadata-proxy-v0.1-4rrgr started at 2022-12-01 04:35:35 +0000 UTC (0+2 container statuses recorded) Dec 1 06:10:00.354: INFO: Container metadata-proxy ready: true, restart count 0 Dec 1 06:10:00.354: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Dec 1 06:10:00.565: INFO: Latency metrics for node ca-master Dec 1 06:10:00.565: INFO: Logging node info for node ca-minion-group-m9zb Dec 1 06:10:00.608: INFO: Node Info: &Node{ObjectMeta:{ca-minion-group-m9zb 4c854cc9-5b08-4d5d-9b2d-526b4e3cf882 18945 0 2022-12-01 05:20:51 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:ca-minion-group-m9zb kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-12-01 05:20:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-12-01 05:21:02 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.15.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-12-01 05:21:02 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {e2e.test Update v1 2022-12-01 05:58:26 +0000 UTC FieldsV1 {"f:spec":{"f:taints":{}}} } {node-problem-detector Update v1 2022-12-01 06:06:02 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-12-01 06:07:19 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.15.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-autoscaling-migs/us-west1-b/ca-minion-group-m9zb,Unschedulable:false,Taints:[]Taint{Taint{Key:DeletionCandidateOfClusterAutoscaler,Value:1669872061,Effect:PreferNoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.15.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-12-01 06:06:02 +0000 UTC,LastTransitionTime:2022-12-01 05:20:56 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-12-01 06:06:02 +0000 UTC,LastTransitionTime:2022-12-01 05:20:56 
+0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-12-01 06:06:02 +0000 UTC,LastTransitionTime:2022-12-01 05:20:56 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-12-01 06:06:02 +0000 UTC,LastTransitionTime:2022-12-01 05:20:56 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-12-01 06:06:02 +0000 UTC,LastTransitionTime:2022-12-01 05:20:56 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-12-01 06:06:02 +0000 UTC,LastTransitionTime:2022-12-01 05:20:56 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-12-01 06:06:02 +0000 UTC,LastTransitionTime:2022-12-01 05:20:56 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-12-01 05:21:02 +0000 UTC,LastTransitionTime:2022-12-01 05:21:02 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-12-01 06:07:19 +0000 UTC,LastTransitionTime:2022-12-01 05:20:51 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-12-01 06:07:19 +0000 UTC,LastTransitionTime:2022-12-01 05:20:51 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-12-01 06:07:19 +0000 UTC,LastTransitionTime:2022-12-01 05:20:51 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-12-01 06:07:19 +0000 UTC,LastTransitionTime:2022-12-01 05:20:52 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.17,},NodeAddress{Type:ExternalIP,Address:34.168.125.36,},NodeAddress{Type:InternalDNS,Address:ca-minion-group-m9zb.c.k8s-jkns-gci-autoscaling-migs.internal,},NodeAddress{Type:Hostname,Address:ca-minion-group-m9zb.c.k8s-jkns-gci-autoscaling-migs.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4798b12b4695449fe4795c73cdd4e8ab,SystemUUID:4798b12b-4695-449f-e479-5c73cdd4e8ab,BootID:b9b728c7-5e8b-4849-a085-242c6074e4ad,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.6.6,KubeletVersion:v1.27.0-alpha.0.54+79cba170b55bd0,KubeProxyVersion:v1.27.0-alpha.0.54+79cba170b55bd0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.54_79cba170b55bd0],SizeBytes:67201736,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Dec 1 06:10:00.609: INFO: Logging kubelet events for node ca-minion-group-m9zb Dec 1 06:10:00.654: INFO: Logging pods the kubelet thinks is on node ca-minion-group-m9zb Dec 1 06:10:00.714: INFO: kube-proxy-ca-minion-group-m9zb started at 2022-12-01 05:20:51 +0000 UTC (0+1 container statuses recorded) Dec 1 06:10:00.714: INFO: Container kube-proxy ready: true, restart count 0 Dec 1 06:10:00.714: INFO: metadata-proxy-v0.1-7cpg9 started at 2022-12-01 05:20:52 +0000 UTC (0+2 container statuses recorded) Dec 1 06:10:00.714: INFO: Container metadata-proxy ready: true, restart count 0 Dec 1 06:10:00.714: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Dec 1 06:10:00.714: INFO: konnectivity-agent-kmdrk started at 2022-12-01 05:21:02 +0000 UTC (0+1 container statuses recorded) Dec 1 06:10:00.714: INFO: Container konnectivity-agent ready: true, restart count 0 Dec 1 06:10:00.924: INFO: Latency metrics for node ca-minion-group-m9zb Dec 1 06:10:00.924: INFO: Logging node info for node ca-minion-group-vlq2 Dec 1 06:10:00.967: INFO: Node Info: &Node{ObjectMeta:{ca-minion-group-vlq2 132befc3-8b36-49c3-8aee-8af679afd99a 18382 0 2022-12-01 05:20:51 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:ca-minion-group-vlq2 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] 
[] [] [{kubelet Update v1 2022-12-01 05:20:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-12-01 05:21:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.14.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-12-01 05:21:02 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-12-01 06:05:15 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status} {node-problem-detector Update v1 2022-12-01 06:06:00 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.14.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-autoscaling-migs/us-west1-b/ca-minion-group-vlq2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.14.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 
DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-12-01 06:06:00 +0000 UTC,LastTransitionTime:2022-12-01 05:20:55 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-12-01 06:06:00 +0000 UTC,LastTransitionTime:2022-12-01 05:20:55 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-12-01 06:06:00 +0000 UTC,LastTransitionTime:2022-12-01 05:20:55 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-12-01 06:06:00 +0000 UTC,LastTransitionTime:2022-12-01 05:20:55 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-12-01 06:06:00 +0000 UTC,LastTransitionTime:2022-12-01 05:20:55 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-12-01 06:06:00 +0000 UTC,LastTransitionTime:2022-12-01 05:20:55 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-12-01 06:06:00 +0000 UTC,LastTransitionTime:2022-12-01 05:20:55 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-12-01 05:21:02 +0000 UTC,LastTransitionTime:2022-12-01 05:21:02 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-12-01 06:05:15 +0000 UTC,LastTransitionTime:2022-12-01 05:20:51 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-12-01 06:05:15 +0000 UTC,LastTransitionTime:2022-12-01 05:20:51 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-12-01 06:05:15 +0000 UTC,LastTransitionTime:2022-12-01 05:20:51 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-12-01 06:05:15 +0000 UTC,LastTransitionTime:2022-12-01 05:20:51 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.16,},NodeAddress{Type:ExternalIP,Address:35.227.188.214,},NodeAddress{Type:InternalDNS,Address:ca-minion-group-vlq2.c.k8s-jkns-gci-autoscaling-migs.internal,},NodeAddress{Type:Hostname,Address:ca-minion-group-vlq2.c.k8s-jkns-gci-autoscaling-migs.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:240e9092aae0ae79fa5461368e619ce5,SystemUUID:240e9092-aae0-ae79-fa54-61368e619ce5,BootID:02338211-5cf4-4ba4-bf8a-82e73c605696,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.6.6,KubeletVersion:v1.27.0-alpha.0.54+79cba170b55bd0,KubeProxyVersion:v1.27.0-alpha.0.54+79cba170b55bd0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.54_79cba170b55bd0],SizeBytes:67201736,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Dec 1 06:10:00.968: INFO: Logging kubelet events for node ca-minion-group-vlq2 Dec 1 06:10:01.013: INFO: Logging pods the kubelet thinks is on node ca-minion-group-vlq2 Dec 1 06:10:01.081: INFO: konnectivity-agent-x9vdq started at 2022-12-01 05:21:02 +0000 UTC (0+1 container statuses recorded) Dec 1 06:10:01.081: INFO: Container konnectivity-agent ready: true, restart count 0 Dec 1 06:10:01.081: INFO: volume-snapshot-controller-0 started at 2022-12-01 05:49:04 +0000 UTC (0+1 container statuses recorded) Dec 1 06:10:01.081: INFO: Container volume-snapshot-controller ready: true, restart count 0 Dec 1 06:10:01.081: INFO: metadata-proxy-v0.1-mvw84 started at 
2022-12-01 05:20:52 +0000 UTC (0+2 container statuses recorded) Dec 1 06:10:01.081: INFO: Container metadata-proxy ready: true, restart count 0 Dec 1 06:10:01.081: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Dec 1 06:10:01.081: INFO: l7-default-backend-8549d69d99-n8nmc started at 2022-12-01 05:49:04 +0000 UTC (0+1 container statuses recorded) Dec 1 06:10:01.081: INFO: Container default-http-backend ready: true, restart count 0 Dec 1 06:10:01.081: INFO: metrics-server-v0.5.2-867b8754b9-pmk4k started at 2022-12-01 05:30:20 +0000 UTC (0+2 container statuses recorded) Dec 1 06:10:01.081: INFO: Container metrics-server ready: true, restart count 1 Dec 1 06:10:01.081: INFO: Container metrics-server-nanny ready: true, restart count 0 Dec 1 06:10:01.081: INFO: kube-proxy-ca-minion-group-vlq2 started at 2022-12-01 05:20:51 +0000 UTC (0+1 container statuses recorded) Dec 1 06:10:01.081: INFO: Container kube-proxy ready: true, restart count 0 Dec 1 06:10:01.081: INFO: coredns-6d97d5ddb-gpg9p started at 2022-12-01 05:49:04 +0000 UTC (0+1 container statuses recorded) Dec 1 06:10:01.081: INFO: Container coredns ready: true, restart count 0 Dec 1 06:10:01.272: INFO: Latency metrics for node ca-minion-group-vlq2 [DeferCleanup (Each)] [sig-autoscaling] Cluster size autoscaling [Slow] tear down framework | framework.go:193 STEP: Destroying namespace "autoscaling-1275" for this suite. 12/01/22 06:10:01.272
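The node dump above records the taints (for example DeletionCandidateOfClusterAutoscaler on ca-minion-group-m9zb) and the node conditions that the framework logs when a spec fails. For triage outside the test harness, below is a minimal client-go sketch, assuming a local kubeconfig and using the node name from this run purely as an illustration, that prints the same taint and condition fields; it is not part of the e2e framework itself.

package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Assumes a kubeconfig at the default location; adjust for your environment.
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Node name taken from the dump above; substitute the node you are inspecting.
	node, err := clientset.CoreV1().Nodes().Get(context.TODO(), "ca-minion-group-m9zb", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Print the same taint and condition fields the framework dumps on failure.
	for _, t := range node.Spec.Taints {
		fmt.Printf("taint: %s=%s:%s\n", t.Key, t.Value, t.Effect)
	}
	for _, c := range node.Status.Conditions {
		fmt.Printf("condition: %s=%s (%s)\n", c.Type, c.Status, c.Reason)
	}
}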
error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Feature:ClusterSizeAutoscalingScaleUp\]|\[Feature:ClusterSizeAutoscalingScaleDown\] --ginkgo.skip=\[Flaky\] --minStartupPods=8 --report-dir=/workspace/_artifacts --disable-log-dump=true --cluster-ip-range=10.64.0.0/14: exit status 1
from junit_runner.xml
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group up from 0[Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Shouldn't perform scale up operation and should list unhealthy status if most of the cluster is broken[Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining multiple pods one by one as dictated by pdb[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down when rescheduling a pod is required and pdb allows for it[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed and one node is broken [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small and one node is broken [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pod requesting EmptyDir volume is pending [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pod requesting volume is pending [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pods are pending due to host port conflict [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pods are pending due to pod anti-affinity [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should scale up when non expendable pod is created [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't increase cluster size if pending pod is too large [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale down when non expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale up when expendable pod is preempted [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [ReportAfterSuite] Kubernetes e2e suite report
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedBeforeSuite]
kubetest Check APIReachability
kubetest Deferred TearDown
kubetest DumpClusterLogs
kubetest Extract
kubetest GetDeployer
kubetest IsUp
kubetest Prepare
kubetest TearDown
kubetest TearDown Previous
kubetest Timeout
kubetest Up
kubetest kubectl version
kubetest list nodes
kubetest test setup
Kubernetes e2e suite [It] [sig-api-machinery] API priority and fairness should ensure that requests can be classified by adding FlowSchema and PriorityLevelConfiguration
Kubernetes e2e suite [It] [sig-api-machinery] API priority and fairness should ensure that requests can't be drowned out (fairness)
Kubernetes e2e suite [It] [sig-api-machinery] API priority and fairness should ensure that requests can't be drowned out (priority)
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] [Flaky] kubectl explain works for CR with the same resource name as built-in object.
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST NOT fail validation for create of a custom resource that satisfies the x-kubernetes-validations rules
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail create of a custom resource definition that contains a x-kubernetes-validations rule that refers to a property that do not exist
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail create of a custom resource definition that contains an x-kubernetes-validations rule that contains a syntax error
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail create of a custom resource definition that contains an x-kubernetes-validations rule that exceeds the estimated cost limit
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail create of a custom resource that exceeds the runtime cost limit for x-kubernetes-validations rule execution
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail update of a custom resource that does not satisfy a x-kubernetes-validations transition rule
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail validation for create of a custom resource that does not satisfy the x-kubernetes-validations rules
Kubernetes e2e suite [It] [sig-api-machinery] Discovery Custom resource should have storage version hash
Kubernetes e2e suite [It] [sig-api-machinery] Discovery should accurately determine present and missing resources
Kubernetes e2e suite [It] [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Etcd failure [Disruptive] should recover from SIGKILL
Kubernetes e2e suite [It] [sig-api-machinery] Etcd failure [Disruptive] should recover from network partition with master
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should delete jobs and pods created by cronjob
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should orphan pods created by rc if deleteOptions.OrphanDependents is nil
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should support cascading deletion of custom resources
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should support orphan deletion of custom resources
Kubernetes e2e suite [It] [sig-api-machinery] Generated clientset should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod
Kubernetes e2e suite [It] [sig-api-machinery] Generated clientset should create v1 cronJobs, delete cronJobs, watch cronJobs
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should always delete fast (ALL of 100 namespaces in 150 seconds) [Feature:ComprehensiveNamespaceDraining]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should apply a finalizer to a Namespace [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should apply an update to a Namespace [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should apply changes to a namespace status [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds)
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's multiple priority class scope (quota set to pod count: 2) against 2 pods with same priority classes.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (cpu, memory quota set) against a pod with same priority class.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against 2 pods with different priority class.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against 2 pods with same priority class.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against a pod with different priority class (ScopeSelectorOpExists).
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against a pod with different priority class (ScopeSelectorOpNotIn).
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against a pod with same priority class.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:ScopeSelectors] should verify ResourceQuota with best effort scope using scope-selectors.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:ScopeSelectors] should verify ResourceQuota with terminating scopes through scope selectors.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should apply changes to a resourcequota status [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a custom resource.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should manage the lifecycle of a ResourceQuota [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should verify ResourceQuota with cross namespace pod affinity scope using scope-selectors.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Server request timeout default timeout should be used if the specified timeout in the request URL is 0s
Kubernetes e2e suite [It] [sig-api-machinery] Server request timeout should return HTTP status code 400 if the user specifies an invalid timeout in the request URL
Kubernetes e2e suite [It] [sig-api-machinery] Server request timeout the request should be served with a default timeout if the specified timeout in the request URL exceeds maximum allowed
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should create an applied object if it does not already exist
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should give up ownership of a field if forced applied by a controller
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should ignore conflict errors if force apply is used
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should not remove a field if an owner unsets the field but other managers still have ownership of the field
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should remove a field if it is owned but removed in the apply request
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should work for CRDs
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should work for subresources
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for API chunking should return chunks of results for list calls
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for API chunking should support continue listing from the last key if the original version has been compacted away, though the list is inconsistent [Slow]
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for Table transformation should return chunks of table results for list calls
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for Table transformation should return generic metadata details across all namespaces for nodes
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for Table transformation should return pod details
Kubernetes e2e suite [It] [sig-api-machinery] StorageVersion resources [Feature:StorageVersionAPI] storage version with non-existing id should be GC'ed
Kubernetes e2e suite [It] [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/json"
Kubernetes e2e suite [It] [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/json,application/vnd.kubernetes.protobuf"
Kubernetes e2e suite [It] [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/vnd.kubernetes.protobuf"
Kubernetes e2e suite [It] [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/vnd.kubernetes.protobuf,application/json"
Kubernetes e2e suite [It] [sig-api-machinery] health handlers should contain necessary checks
Kubernetes e2e suite [It] [sig-api-machinery] kube-apiserver identity [Feature:APIServerIdentity] kube-apiserver identity should persist after restart [Disruptive]
Kubernetes e2e suite [It] [sig-api-machinery] server version should find the server version [Conformance]
Kubernetes e2e suite [It] [sig-apps] ControllerRevision [Serial] should manage the lifecycle of a ControllerRevision [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should be able to schedule after more than 100 missed schedule
Kubernetes e2e suite [It] [sig-apps] CronJob should delete failed finished jobs with limit of one job
Kubernetes e2e suite [It] [sig-apps] CronJob should delete successful finished jobs with limit of one successful job
Kubernetes e2e suite [It] [sig-apps] CronJob should not emit unexpected warnings
Kubernetes e2e suite [It] [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should remove from active list jobs that have been deleted
Kubernetes e2e suite [It] [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should support CronJob API operations [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should support timezone
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should list and delete a collection of DaemonSets [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should not update pod when spec was updated and update strategy is OnDelete
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should run and stop complex daemon with node affinity
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should surge pods onto nodes when spec was updated and update strategy is RollingUpdate
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should verify changes to a daemon set status [Conformance]
Kubernetes e2e suite [It] [sig-apps] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart
Kubernetes e2e suite [It] [sig-apps] DaemonRestart [Disruptive] Kube-proxy should recover after being killed accidentally
Kubernetes e2e suite [It] [sig-apps] DaemonRestart [Disruptive] Kubelet should not restart containers across restart
Kubernetes e2e suite [It] [sig-apps] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart
Kubernetes e2e suite [It] [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment deployment reaping should cascade to its replica sets and pods
Kubernetes e2e suite [It] [sig-apps] Deployment deployment should delete old replica sets [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment deployment should support proportional scaling [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment deployment should support rollover [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment iterative rollouts should eventually progress
Kubernetes e2e suite [It] [sig-apps] Deployment should not disrupt a cloud load-balancer's connectivity during rollout
Kubernetes e2e suite [It] [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment should validate Deployment Status endpoints [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment test Deployment ReplicaSet orphaning and adoption regarding controllerRef
Kubernetes e2e suite [It] [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: enough pods, absolute => should allow an eviction
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: maxUnavailable allow single eviction, percentage => should allow an eviction
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: maxUnavailable deny evictions, integer => should not allow an eviction [Serial]
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: no PDB => should allow an eviction
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: too few pods, absolute => should not allow an eviction
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction [Serial]
Kubernetes e2e suite [It] [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it [Conformance]
Kubernetes e2e suite [It] [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]
Kubernetes e2e suite [It] [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]
Kubernetes e2e suite [It] [sig-apps] DisruptionController should observe that the PodDisruptionBudget status is not updated for unmanaged pods
Kubernetes e2e suite [It] [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job Using a pod failure policy to not count some failures towards the backoffLimit Ignore DisruptionTarget condition
Kubernetes e2e suite [It] [sig-apps] Job Using a pod failure policy to not count some failures towards the backoffLimit Ignore exit code 137
Kubernetes e2e suite [It] [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should allow to use the pod failure policy on exit code to fail the job early
Kubernetes e2e suite [It] [sig-apps] Job should allow to use the pod failure policy to not count the failure towards the backoffLimit
Kubernetes e2e suite [It] [sig-apps] Job should apply changes to a job status [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should create pods for an Indexed job with completion indexes and specified hostname [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should delete a job [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should delete pods when suspended
Kubernetes e2e suite [It] [sig-apps] Job should fail to exceed backoffLimit
Kubernetes e2e suite [It] [sig-apps] Job should fail when exceeds active deadline
Kubernetes e2e suite [It] [sig-apps] Job should manage the lifecycle of a job [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should not create pods when created in suspend state
Kubernetes e2e suite [It] [sig-apps] Job should remove pods when job is deleted
Kubernetes e2e suite [It] [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should run a job to completion when tasks sometimes fail and are not locally restarted
Kubernetes e2e suite [It] [sig-apps] Job should run a job to completion when tasks succeed
Kubernetes e2e suite [It] [sig-apps] Job should run a job to completion with CPU requests [Serial]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet Replace and Patch tests [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should list and delete a collection of ReplicaSets [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should serve a basic image on each replica with a private image
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should surface a failure condition on a common issue like exceeded quota
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should validate Replicaset Status endpoints [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should get and update a ReplicationController scale [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should release no longer matching pods [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should serve a basic image on each replica with a private image
Kubernetes e2e suite [It] [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet AvailableReplicas should get updated accordingly when MinReadySeconds is enabled
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should adopt matching orphans and release non-matching pods
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should implement legacy replacement when the update strategy is OnDelete
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should list, patch and delete a collection of StatefulSets [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should not deadlock when a pod's predecessor fails
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications with PVCs
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working CockroachDB cluster
Kubernetes e2e suite [It] [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working mysql cluster
Kubernetes e2e suite [It] [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working redis cluster
Kubernetes e2e suite [It] [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working zookeeper cluster
Kubernetes e2e suite [It] [sig-apps] StatefulSet MinReadySeconds should be honored when enabled
Kubernetes e2e suite [It] [sig-apps] StatefulSet Non-retain StatefulSetPersistentVolumeClaimPolicy [Feature:StatefulSetAutoDeletePVC] should delete PVCs after adopting pod (WhenDeleted)
Kubernetes e2e suite [It] [sig-apps] StatefulSet Non-retain StatefulSetPersistentVolumeClaimPolicy [Feature:StatefulSetAutoDeletePVC] should delete PVCs after adopting pod (WhenScaled) [Feature:StatefulSetAutoDeletePVC]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Non-retain StatefulSetPersistentVolumeClaimPolicy [Feature:StatefulSetAutoDeletePVC] should delete PVCs with a OnScaledown policy
Kubernetes e2e suite [It] [sig-apps] StatefulSet Non-retain StatefulSetPersistentVolumeClaimPolicy [Feature:StatefulSetAutoDeletePVC] should delete PVCs with a WhenDeleted policy
Kubernetes e2e suite [It] [sig-apps] TTLAfterFinished job should be deleted once it finishes after TTL seconds
Kubernetes e2e suite [It] [sig-apps] stateful Upgrade [Feature:StatefulUpgrade] stateful upgrade should maintain a functioning cluster
Kubernetes e2e suite [It] [sig-architecture] Conformance Tests should have at least two untainted nodes [Conformance]
Kubernetes e2e suite [It] [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]
Kubernetes e2e suite [It] [sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR
Kubernetes e2e suite [It] [sig-auth] SelfSubjectReview [Feature:APISelfSubjectReview] should support SelfSubjectReview API operations
Kubernetes e2e suite [It] [sig-auth] ServiceAccount admission controller migration [Feature:BoundServiceAccountTokenVolume] master upgrade should maintain a functioning cluster
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts no secret-based service account token should be auto-generated
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should mount projected service account token [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should update a ServiceAccount [Conformance]
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthenticator] The kubelet can delegate ServiceAccount tokens to the API server
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthenticator] The kubelet's main port 10250 should reject requests with no credentials
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] A node shouldn't be able to create another node
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] A node shouldn't be able to delete another node
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] Getting a non-existent configmap should exit with the Forbidden error, not a NotFound error
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] Getting a non-existent secret should exit with the Forbidden error, not a NotFound error
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] Getting a secret for a workload the node has access to should succeed
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] Getting an existing configmap should exit with the Forbidden error
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] Getting an existing secret should exit with the Forbidden error
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] CA ignores unschedulable pods while scheduling schedulable pods [Feature:ClusterAutoscalerScalability6]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale down empty nodes [Feature:ClusterAutoscalerScalability3]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale down underutilized nodes [Feature:ClusterAutoscalerScalability4]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale up at all [Feature:ClusterAutoscalerScalability1]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale up twice [Feature:ClusterAutoscalerScalability2]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] shouldn't scale down with underutilized nodes due to host port conflicts [Feature:ClusterAutoscalerScalability5]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should not scale GPU pool up if pod does not require GPUs [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should scale down GPU pool from 1 [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should scale up GPU pool from 0 [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should scale up GPU pool from 1 [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed when there is non autoscaled pool[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should disable node pool autoscaling [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small and there is another node pool that is not autoscaled [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should scale up correct target pool [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed
Kubernetes e2e suite [It] [sig-autoscaling] DNS horizontal autoscaling kube-dns-autoscaler should scale kube-dns pods in both nonfaulty and faulty scenarios
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:ClusterSizeAutoscalingScaleUp] [Slow] Autoscaling Autoscaling a service from 1 pod and 3 nodes to 8 pods and >=4 nodes takes less than 15 minutes
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) CustomResourceDefinition Should scale with a CRD targetRef
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) ReplicationController light Should scale from 1 pod to 2 pods
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) ReplicationController light [Slow] Should scale from 2 pods to 1 pod
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment (Container Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Utilization for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment (Container Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Value for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment (Pod Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Utilization for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment (Pod Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Value for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment (Pod Resource) Should scale from 5 pods to 3 pods and then from 3 pods to 1 pod using Average Utilization for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and then from 3 pods to 1 pod
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet with idle sidecar (ContainerResource use case) Should not scale up on a busy sidecar with an idle application
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet with idle sidecar (ContainerResource use case) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods on a busy application with an idle sidecar container
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods and verify decision stability
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and then from 3 pods to 1 pod and verify decision stability
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) [Serial] [Slow] Deployment (Container Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Utilization for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) [Serial] [Slow] Deployment (Container Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Value for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) [Serial] [Slow] Deployment (Pod Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Utilization for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) [Serial] [Slow] Deployment (Pod Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Value for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with autoscaling disabled shouldn't scale down
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with autoscaling disabled shouldn't scale up
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with both scale up and down controls configured should keep recommendation within the range over two stabilization windows
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with both scale up and down controls configured should keep recommendation within the range with stabilization window and pod limit rate
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with long upscale stabilization window should scale up only after the stabilization period
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with scale limited by number of Pods rate should scale down no more than given number of Pods per minute
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with scale limited by number of Pods rate should scale up no more than given number of Pods per minute
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with scale limited by percentage should scale down no more than given percentage of current Pods per minute
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with scale limited by percentage should scale up no more than given percentage of current Pods per minute
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with short downscale stabilization window should scale down soon after the stabilization period
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with Custom Metric of type Object from Stackdriver should scale down
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with Custom Metric of type Object from Stackdriver should scale down to 0
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with Custom Metric of type Pod from Stackdriver should scale down
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with Custom Metric of type Pod from Stackdriver should scale down with Prometheus
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with Custom Metric of type Pod from Stackdriver should scale up with two metrics
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with External Metric from Stackdriver should scale down with target average value
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with External Metric from Stackdriver should scale down with target value
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with External Metric from Stackdriver should scale up with two metrics
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with multiple metrics of different types should not scale down when one metric is missing (Container Resource and External Metrics)
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with multiple metrics of different types should not scale down when one metric is missing (Pod and Object Metrics)
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with multiple metrics of different types should scale up when one metric is missing (Pod and External metrics)
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with multiple metrics of different types should scale up when one metric is missing (Resource and Object metrics)
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 should support forwarding over websockets
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects NO client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends NO DATA, and disconnects
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on localhost should support forwarding over websockets
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects NO client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends NO DATA, and disconnects
Kubernetes e2e suite [It] [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl apply apply set/view last-applied
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl apply should apply a new configuration to an existing RC
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl apply should reuse port when apply to an existing SVC
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl cluster-info dump should check if cluster-info dump succeeds
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl copy should copy a file from a running Pod
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl create quota should create a quota with scopes
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl create quota should create a quota without scopes
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl create quota should reject quota with invalid scopes
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for cronjob
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl events should show event when pod is created
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl get componentstatuses should get componentstatuses
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl taint [Serial] should remove all the taints with the same key off a node
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl taint [Serial] should update the taint on a node
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl validation should create/apply a CR with unknown fields for CRD with no validation schema
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl validation should create/apply a valid CR for CRD with validation schema
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl validation should create/apply an invalid/valid CR with arbitrary-extra properties for CRD with partially-specified validation schema
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl validation should detect unknown metadata fields in both the root and embedded object of a CR
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl validation should detect unknown metadata fields of a typed object
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should contain last line of the log
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should handle in-cluster config
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should return command exit codes [Slow] running a failing command with --leave-stdin-open
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should return command exit codes [Slow] running a failing command without --restart=Never
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should return command exit codes [Slow] running a failing command without --restart=Never, but with --rm
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should return command exit codes execing into a container with a failing command
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should return command exit codes execing into a container with a successful command
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should return command exit codes running a failing command
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should return command exit codes running a successful command
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should support exec
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should support exec through an HTTP proxy
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should support exec through kubectl proxy
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should support exec using resource/name
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should support inline execution and attach
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should support port-forward
Kubernetes e2e suite [It] [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client kubectl wait should ignore not found error with --for=delete
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Addon update should propagate add-on file changes [Slow]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Downgrade [Feature:Downgrade] cluster downgrade should maintain a functioning cluster [Feature:ClusterDowngrade]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] GKE node pools [Feature:GKENodePool] should create a cluster with multiple node pools [Feature:GKENodePool]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] HA-master [Feature:HAMaster] survive addition/removal replicas different zones [Serial][Disruptive]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] HA-master [Feature:HAMaster] survive addition/removal replicas multizone workers [Serial][Disruptive]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] HA-master [Feature:HAMaster] survive addition/removal replicas same zone [Serial][Disruptive]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Nodes [Disruptive] Resize [Slow] should be able to add nodes
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Nodes [Disruptive] Resize [Slow] should be able to delete nodes
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Ports Security Check [Feature:KubeletSecurity] should not be able to proxy to cadvisor port 4194 using proxy subresource
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Ports Security Check [Feature:KubeletSecurity] should not be able to proxy to the readonly kubelet port 10255 using proxy subresource
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Ports Security Check [Feature:KubeletSecurity] should not have port 10255 open on its all public IP addresses
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Ports Security Check [Feature:KubeletSecurity] should not have port 4194 open on its all public IP addresses
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by dropping all inbound packets for a while and ensure they function afterwards
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by dropping all outbound packets for a while and ensure they function afterwards
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by ordering clean reboot and ensure they function upon restart
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by ordering unclean reboot and ensure they function upon restart
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by switching off the network interface and ensure they function upon switch on
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by triggering kernel panic and ensure they function upon restart
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Recreate [Feature:Recreate] recreate nodes and ensure they function upon restart
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Upgrade [Feature:Upgrade] cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Upgrade [Feature:Upgrade] master upgrade should maintain a functioning cluster [Feature:MasterUpgrade]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] [Disruptive]NodeLease NodeLease deletion node lease should be deleted when corresponding node is deleted
Kubernetes e2e suite [It] [sig-cloud-provider] [Feature:CloudProvider][Disruptive] Nodes should be deleted on API server if it doesn't exist in the cloud provider
Kubernetes e2e suite [It] [sig-cluster-lifecycle] [Feature:BootstrapTokens] should delete the signed bootstrap tokens from clusterInfo ConfigMap when bootstrap token is deleted
Kubernetes e2e suite [It] [sig-cluster-lifecycle] [Feature:BootstrapTokens] should delete the token secret when the secret expired
Kubernetes e2e suite [It] [sig-cluster-lifecycle] [Feature:BootstrapTokens] should not delete the token secret when the secret is not expired
Kubernetes e2e suite [It] [sig-cluster-lifecycle] [Feature:BootstrapTokens] should resign the bootstrap tokens when the clusterInfo ConfigMap updated [Serial][Disruptive]
Kubernetes e2e suite [It] [sig-cluster-lifecycle] [Feature:BootstrapTokens] should sign the new added bootstrap tokens
Kubernetes e2e suite [It] [sig-instrumentation] Events API should delete a collection of events [Conformance]
Kubernetes e2e suite [It] [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
Kubernetes e2e suite [It] [sig-instrumentation] Events should delete a collection of events [Conformance]
Kubernetes e2e suite [It] [sig-instrumentation] Events should manage the lifecycle of an event [Conformance]
Kubernetes e2e suite [It] [sig-instrumentation] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s seconds, for a duration of 2m0s
Kubernetes e2e suite [It] [sig-instrumentation] MetricsGrabber should grab all metrics from API server.
Kubernetes e2e suite [It] [sig-instrumentation] MetricsGrabber should grab all metrics from a ControllerManager.
Kubernetes e2e suite [It] [sig-instrumentation] MetricsGrabber should grab all metrics from a Kubelet.
Kubernetes e2e suite [It] [sig-instrumentation] MetricsGrabber should grab all metrics from a Scheduler.
Kubernetes e2e suite [It] [sig-instrumentation] Stackdriver Monitoring should have accelerator metrics [Feature:StackdriverAcceleratorMonitoring]
Kubernetes e2e suite [It] [sig-instrumentation] Stackdriver Monitoring should have cluster metrics [Feature:StackdriverMonitoring]
Kubernetes e2e suite [It] [sig-instrumentation] Stackdriver Monitoring should run Custom Metrics - Stackdriver Adapter for external metrics [Feature:StackdriverExternalMetrics]
Kubernetes e2e suite [It] [sig-instrumentation] Stackdriver Monitoring should run Custom Metrics - Stackdriver Adapter for new resource model [Feature:StackdriverCustomMetrics]
Kubernetes e2e suite [It] [sig-instrumentation] Stackdriver Monitoring should run Custom Metrics - Stackdriver Adapter for old resource model [Feature:StackdriverCustomMetrics]
Kubernetes e2e suite [It] [sig-instrumentation] Stackdriver Monitoring should run Stackdriver Metadata Agent [Feature:StackdriverMetadataAgent]
Kubernetes e2e suite [It] [sig-network] CVE-2021-29923 IPv4 Service Type ClusterIP with leading zeros should work interpreted as decimal
Kubernetes e2e suite [It] [sig-network] ClusterDns [Feature:Example] should create pod that uses dns
Kubernetes e2e suite [It] [sig-network] Conntrack should be able to preserve UDP traffic when initial unready endpoints get ready
Kubernetes e2e suite [It] [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service
Kubernetes e2e suite [It] [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service
Kubernetes e2e suite [It] [sig-network] Conntrack should drop INVALID conntrack entries [Privileged]
Kubernetes e2e suite [It] [sig-network] DNS HostNetwork should resolve DNS of partial qualified names for services on hostNetwork pods with dnsPolicy: ClusterFirstWithHostNet [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] DNS configMap nameserver Change stubDomain should be able to change stubDomain configuration [Slow][Serial]
Kubernetes e2e suite [It] [sig-network] DNS configMap nameserver Forward PTR lookup should forward PTR records lookup to upstream nameserver [Slow][Serial]
Kubernetes e2e suite [It] [sig-network] DNS configMap nameserver Forward external name lookup should forward externalname lookup to upstream nameserver [Slow][Serial]
Kubernetes e2e suite [It] [sig-network] DNS should provide /etc/hosts entries for the cluster [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should provide DNS for ExternalName services [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should provide DNS for pods for Hostname [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should provide DNS for services [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should provide DNS for the cluster [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should provide DNS for the cluster [Provider:GCE]
Kubernetes e2e suite [It] [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] DNS should support configurable pod DNS nameservers [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should support configurable pod resolv.conf
Kubernetes e2e suite [It] [sig-network] DNS should work with the pod containing more than 6 DNS search paths and longer than 256 search list characters
Kubernetes e2e suite [It] [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]
Kubernetes e2e suite [It] [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]
Kubernetes e2e suite [It] [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]
Kubernetes e2e suite [It] [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]
Kubernetes e2e suite [It] [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]
Kubernetes e2e suite [It] [sig-network] Firewall rule [Slow] [Serial] should create valid firewall rules for LoadBalancer type service
Kubernetes e2e suite [It] [sig-network] Firewall rule control plane should not expose well-known ports
Kubernetes e2e suite [It] [sig-network] Firewall rule should have correct firewall rules for e2e cluster
Kubernetes e2e suite [It] [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-network] Ingress API should support creating Ingress API operations [Conformance]
Kubernetes e2e suite [It] [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]
Kubernetes e2e suite [It] [sig-network] IngressClass [Feature:Ingress] should allow IngressClass to have Namespace-scoped parameters [Serial]
Kubernetes e2e suite [It] [sig-network] IngressClass [Feature:Ingress] should choose the one with the later CreationTimestamp, if equal the one with the lower name when two ingressClasses are marked as default[Serial]
Kubernetes e2e suite [It] [sig-network] IngressClass [Feature:Ingress] should not set default value if no default IngressClass [Serial]
Kubernetes e2e suite [It] [sig-network] IngressClass [Feature:Ingress] should set default value on new IngressClass [Serial]
Kubernetes e2e suite [It] [sig-network] KubeProxy should set TCP CLOSE_WAIT timeout [Privileged]
Kubernetes e2e suite [It] [sig-network] LoadBalancers ESIPP [Slow] should handle updates to ExternalTrafficPolicy field
Kubernetes e2e suite [It] [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints
Kubernetes e2e suite [It] [sig-network] LoadBalancers ESIPP [Slow] should work for type=LoadBalancer
Kubernetes e2e suite [It] [sig-network] LoadBalancers ESIPP [Slow] should work for type=NodePort
Kubernetes e2e suite [It] [sig-network] LoadBalancers ESIPP [Slow] should work from pods
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to create LoadBalancer Service without NodePort and change it [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to preserve UDP traffic when server pod cycles for a LoadBalancer service on different nodes
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to preserve UDP traffic when server pod cycles for a LoadBalancer service on the same nodes
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to switch session affinity for LoadBalancer service with ESIPP off [Slow] [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to switch session affinity for LoadBalancer service with ESIPP on [Slow] [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should handle load balancer cleanup finalizer for service [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should have session affinity work for LoadBalancer service with ESIPP off [Slow] [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should have session affinity work for LoadBalancer service with ESIPP on [Slow] [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should only allow access from service loadbalancer source ranges [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should reconcile LB health check interval [Slow][Serial][Disruptive]
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:Ingress] should conform to Ingress spec
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] rolling update backend pods should not cause service disruption
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] should be able to create a ClusterIP service
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] should be able to switch between IG and NEG modes
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] should conform to Ingress spec
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] should create NEGs for all ports with the Ingress annotation, and NEGs for the standalone annotation otherwise
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] should sync endpoints for both Ingress-referenced NEG and standalone NEG
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] should sync endpoints to NEG
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 Scalability GCE [Slow] [Serial] [Feature:IngressScale] Creating and updating ingresses should happen promptly with small/medium/large amount of ingresses
Kubernetes e2e suite [It] [sig-network] Netpol API should support creating NetworkPolicy API operations
Kubernetes e2e suite [It] [sig-network] Netpol API should support creating NetworkPolicy API with endport field
Kubernetes e2e suite [It] [sig-network] Netpol API should support creating NetworkPolicy with Status subresource [Feature:NetworkPolicyStatus]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should allow egress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should allow egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should allow ingress access from namespace on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should allow ingress access from updated namespace [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should allow ingress access from updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should allow ingress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should deny egress from all pods in a namespace [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should deny egress from pods based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should deny ingress access to updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should deny ingress from pods on other namespaces [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce egress policy allowing traffic to a server in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce except clause while egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce ingress policy allowing any port traffic to a server on a specific protocol [Feature:NetworkPolicy] [Feature:UDP]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce multiple egress policies with egress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce multiple ingress policies with ingress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce multiple, stacked policies with overlapping podSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policies to check ingress and egress policies can be controlled independently based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on Multiple PodSelectors and NamespaceSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on NamespaceSelector with MatchExpressions using default ns label [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on NamespaceSelector with MatchExpressions[Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on PodSelector or NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on PodSelector with MatchExpressions[Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on any PodSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow ingress traffic for a target [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow ingress traffic from pods in all namespaces [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow traffic based on NamespaceSelector with MatchLabels using default ns label [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow traffic from pods within server namespace based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow traffic only from a different namespace, based on NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce updated policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should ensure an IP overlapping both IPBlock.CIDR and IPBlock.Except is allowed [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should not allow access by TCP when a policy specifies only UDP [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should not mistakenly treat 'protocol: SCTP' as 'protocol: TCP', even if the plugin doesn't support SCTP [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should properly isolate pods that are selected by a policy allowing SCTP, even if the plugin doesn't support SCTP [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should stop enforcing policies after they are deleted [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should support a 'default-deny-all' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should support a 'default-deny-ingress' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should support allow-all policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should support denying of egress traffic on the client side (even if the server explicitly allows this traffic) [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should work with Ingress, Egress specified together [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should support a 'default-deny-ingress' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol [LinuxOnly] NetworkPolicy between server and client using UDP should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol [LinuxOnly] NetworkPolicy between server and client using UDP should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol [LinuxOnly] NetworkPolicy between server and client using UDP should support a 'default-deny-ingress' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicy API should support creating NetworkPolicy API operations
Kubernetes e2e suite [It] [sig-network] NetworkPolicy [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicy [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicy [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should support a 'default-deny' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow egress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from namespace on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from updated namespace [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should deny ingress access to updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce egress policy allowing traffic to a server in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce except clause while egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple egress policies with egress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple ingress policies with ingress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple, stacked policies with overlapping podSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policies to check ingress and egress policies can be controlled independently based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on NamespaceSelector with MatchExpressions[Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector or NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector with MatchExpressions[Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic from pods within server namespace based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic only from a different namespace, based on NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce updated policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should ensure an IP overlapping both IPBlock.CIDR and IPBlock.Except is allowed [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should not allow access by TCP when a policy specifies only SCTP [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should stop enforcing policies after they are deleted [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support a 'default-deny-all' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support a 'default-deny-ingress' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support allow-all policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should work with Ingress,Egress specified together [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: sctp [LinuxOnly][Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Pods should function for node-pod communication: sctp [LinuxOnly][Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should be able to handle large requests: http
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should be able to handle large requests: udp
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for client IP based session affinity: http [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for client IP based session affinity: udp [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for endpoint-Service: http
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for endpoint-Service: sctp [Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for endpoint-Service: udp
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for multiple endpoint-Services with same selector
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for node-Service: http
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for node-Service: sctp [Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for node-Service: udp
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for pod-Service: http
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for pod-Service: sctp [Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for pod-Service: udp
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for service endpoints using hostNetwork
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should support basic nodePort: udp functionality
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should update endpoints: http
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should update endpoints: udp
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should update nodePort: http [Slow]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should update nodePort: udp [Slow]
Kubernetes e2e suite [It] [sig-network] Networking IPerf2 [Feature:Networking-Performance] should run iperf2
Kubernetes e2e suite [It] [sig-network] Networking should check kube-proxy urls
Kubernetes e2e suite [It] [sig-network] Networking should provide Internet connection for containers [Feature:Networking-IPv4]
Kubernetes e2e suite [It] [sig-network] Networking should provide Internet connection for containers [Feature:Networking-IPv6][Experimental][LinuxOnly]
Kubernetes e2e suite [It] [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services
Kubernetes e2e suite [It] [sig-network] Networking should provider Internet connection for containers using DNS [Feature:Networking-DNS]
Kubernetes e2e suite [It] [sig-network] Networking should recreate its iptables rules if they are deleted [Disruptive]
Kubernetes e2e suite [It] [sig-network] NoSNAT [Feature:NoSNAT] [Slow] Should be able to send traffic between Pods without SNAT
Kubernetes e2e suite [It] [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service Proxy [Conformance]
Kubernetes e2e suite [It] [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]
Kubernetes e2e suite [It] [sig-network] Proxy version v1 should proxy logs on node using proxy subresource
Kubernetes e2e suite [It] [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource
Kubernetes e2e suite [It] [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]
Kubernetes e2e suite [It] [sig-network] SCTP [LinuxOnly] should allow creating a basic SCTP service with pod and endpoints
Kubernetes e2e suite [It] [sig-network] SCTP [LinuxOnly] should create a ClusterIP Service with SCTP ports
Kubernetes e2e suite [It] [sig-network] SCTP [LinuxOnly] should create a Pod with SCTP HostPort
Kubernetes e2e suite [It] [sig-network] Service endpoints latency should not be very high [Conformance]
Kubernetes e2e suite [It] [sig-network] Services GCE [Slow] should be able to create and tear down a standard-tier load balancer [Slow]
Kubernetes e2e suite [It] [sig-network] Services should allow pods to hairpin back to themselves through services
Kubernetes e2e suite [It] [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to connect to terminating and unready endpoints if PublishNotReadyAddresses is true
Kubernetes e2e suite [It] [sig-network] Services should be able to create a functioning NodePort service [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to up and down services
Kubernetes e2e suite [It] [sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols
Kubernetes e2e suite [It] [sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node
Kubernetes e2e suite [It] [sig-network] Services should be rejected for evicted pods (no endpoints exist)
Kubernetes e2e suite [It] [sig-network] Services should be rejected when no endpoints exist
Kubernetes e2e suite [It] [sig-network] Services should be updated after adding or deleting ports
Kubernetes e2e suite [It] [sig-network] Services should check NodePort out-of-range
Kubernetes e2e suite [It] [sig-network] Services should complete a service status lifecycle [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should create endpoints for unready pods
Kubernetes e2e suite [It] [sig-network] Services should delete a collection of services [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should fail health check node port if there are only terminating endpoints [Feature:ProxyTerminatingEndpoints]
Kubernetes e2e suite [It] [sig-network] Services should fallback to local terminating endpoints when there are no ready endpoints with externalTrafficPolicy=Local [Feature:ProxyTerminatingEndpoints]
Kubernetes e2e suite [It] [sig-network] Services should fallback to local terminating endpoints when there are no ready endpoints with internalTrafficPolicy=Local [Feature:ProxyTerminatingEndpoints]
Kubernetes e2e suite [It] [sig-network] Services should fallback to terminating endpoints when there are no ready endpoints with externallTrafficPolicy=Cluster [Feature:ProxyTerminatingEndpoints]
Kubernetes e2e suite [It] [sig-network] Services should fallback to terminating endpoints when there are no ready endpoints with internalTrafficPolicy=Cluster [Feature:ProxyTerminatingEndpoints]
Kubernetes e2e suite [It] [sig-network] Services should find a service from listing all namespaces [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should implement service.kubernetes.io/headless
Kubernetes e2e suite [It] [sig-network] Services should implement service.kubernetes.io/service-proxy-name
Kubernetes e2e suite [It] [sig-network] Services should not be able to connect to terminating and unready endpoints if PublishNotReadyAddresses is false
Kubernetes e2e suite [It] [sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] Services should prevent NodePort collisions
Kubernetes e2e suite [It] [sig-network] Services should provide secure master service [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should release NodePorts on delete
Kubernetes e2e suite [It] [sig-network] Services should respect internalTrafficPolicy=Local Pod (hostNetwork: true) to Pod [Feature:ServiceInternalTrafficPolicy]
Kubernetes e2e suite [It] [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy]
Kubernetes e2e suite [It] [sig-network] Services should respect internalTrafficPolicy=Local Pod to Pod [Feature:ServiceInternalTrafficPolicy]
Kubernetes e2e suite [It] [sig-network] Services should serve a basic endpoint from pods [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should serve endpoints on same port and different protocol for internal traffic on Type LoadBalancer
Kubernetes e2e suite [It] [sig-network] Services should serve multiport endpoints from pods [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should test the lifecycle of an Endpoint [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should work after restarting apiserver [Disruptive]
Kubernetes e2e suite [It] [sig-network] Services should work after restarting kube-proxy [Disruptive]
Kubernetes e2e suite [It] [sig-network] Services should work after the service has been recreated
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should be able to handle large requests: http
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should be able to handle large requests: udp
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for client IP based session affinity: http [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for client IP based session affinity: udp [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for endpoint-Service: http
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for endpoint-Service: udp
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for node-Service: http
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for node-Service: udp
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for pod-Service: http
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for pod-Service: sctp [Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for pod-Service: udp
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for service endpoints using hostNetwork
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should update endpoints: http
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should update endpoints: udp
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should be able to reach pod on ipv4 and ipv6 ip
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should create a single stack service with cluster ip from primary service range
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should create pod, add ipv6 and ipv4 ip to pod ips
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should create service with ipv4 cluster ip
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should create service with ipv4,v6 cluster ip
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should create service with ipv6 cluster ip
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should create service with ipv6,v4 cluster ip
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should have ipv4 and ipv6 internal node ip
Kubernetes e2e suite [It] [sig-network] [Feature:PerformanceDNS][Serial] Should answer DNS query for maximum number of services per cluster
Kubernetes e2e suite [It] [sig-network] [Feature:Topology Hints] should distribute endpoints evenly
Kubernetes e2e suite [It] [sig-network] kube-proxy migration [Feature:KubeProxyDaemonSetMigration] Downgrade kube-proxy from a DaemonSet to static pods should maintain a functioning cluster [Feature:KubeProxyDaemonSetDowngrade]
Kubernetes e2e suite [It] [sig-network] kube-proxy migration [Feature:KubeProxyDaemonSetMigration] Upgrade kube-proxy from static pods to a DaemonSet should maintain a functioning cluster [Feature:KubeProxyDaemonSetUpgrade]
Kubernetes e2e suite [It] [sig-node] AppArmor load AppArmor profiles can disable an AppArmor profile, using unconfined
Kubernetes e2e suite [It] [sig-node] AppArmor load AppArmor profiles should enforce an AppArmor profile
Kubernetes e2e suite [It] [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]
Kubernetes e2e suite [It] [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]
Kubernetes e2e suite [It] [sig-node] ConfigMap should update ConfigMap successfully
Kubernetes e2e suite [It] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart https hook properly [MinimumKubeletVersion:1.23] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop https hook properly [MinimumKubeletVersion:1.23] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test on terminated container should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test on terminated container should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test on terminated container should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test on terminated container should report termination message if TerminationMessagePath is set [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test on terminated container should report termination message if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Containers should be able to override the image's default arguments (container cmd) [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Containers should be able to override the image's default command (container entrypoint) [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with delayed allocation supports external claim referenced by multiple containers of multiple pods
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with delayed allocation supports external claim referenced by multiple pods
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with delayed allocation supports init containers
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with delayed allocation supports inline claim referenced by multiple containers
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with delayed allocation supports simple pod referencing external resource claim
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with delayed allocation supports simple pod referencing inline resource claim
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with immediate allocation supports external claim referenced by multiple containers of multiple pods
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with immediate allocation supports external claim referenced by multiple pods
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with immediate allocation supports init containers
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with immediate allocation supports inline claim referenced by multiple containers
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with immediate allocation supports simple pod referencing external resource claim
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with immediate allocation supports simple pod referencing inline resource claim
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] driver supports claim and class parameters
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] kubelet must not run a pod if a claim is not reserved for it
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] kubelet must retry NodePrepareResource
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] kubelet must unprepare resources for force-deleted pod
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] kubelet registers plugin
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] multiple drivers work
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] multiple nodes reallocation works
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] multiple nodes with network-attached resources schedules onto different nodes
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] multiple nodes with node-local resources with delayed allocation uses all resources
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] multiple nodes with node-local resources with immediate allocation uses all resources
Kubernetes e2e suite [It] [sig-node] Downward API [Serial] [Disruptive] [NodeFeature:DownwardAPIHugePages] Downward API tests for hugepages should provide container's limits.hugepages-<pagesize> and requests.hugepages-<pagesize> as env vars
Kubernetes e2e suite [It] [sig-node] Downward API [Serial] [Disruptive] [NodeFeature:DownwardAPIHugePages] Downward API tests for hugepages should provide default limits.hugepages-<pagesize> from node allocatable
Kubernetes e2e suite [It] [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Ephemeral Containers [NodeConformance] will start an ephemeral container in an existing pod [Conformance]
Kubernetes e2e suite [It] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running
Kubernetes e2e suite [It] [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
Kubernetes e2e suite [It] [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
Kubernetes e2e suite [It] [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
Kubernetes e2e suite [It] [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
Kubernetes e2e suite [It] [sig-node] Kubelet [Serial] [Slow] experimental resource usage tracking [Feature:ExperimentalResourceUsageTracking] resource tracking for 100 pods per node
Kubernetes e2e suite [It] [sig-node] Kubelet [Serial] [Slow] regular resource usage tracking [Feature:RegularResourceUsageTracking] resource tracking for 0 pods per node
Kubernetes e2e suite [It] [sig-node] Kubelet [Serial] [Slow] regular resource usage tracking [Feature:RegularResourceUsageTracking] resource tracking for 100 pods per node
Kubernetes e2e suite [It] [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Kubelet when scheduling an agnhost Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Lease lease API should be available [Conformance]
Kubernetes e2e suite [It] [sig-node] Mount propagation should propagate mounts within defined scopes
Kubernetes e2e suite [It] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]
Kubernetes e2e suite [It] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] only evicts pods without tolerations from tainted nodes
Kubernetes e2e suite [It] [sig-node] NoExecuteTaintManager Single Pod [Serial] doesn't evict pod with tolerations from tainted nodes
Kubernetes e2e suite [It] [sig-node] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes
Kubernetes e2e suite [It] [sig-node] NoExecuteTaintManager Single Pod [Serial] evicts pods from tainted nodes
Kubernetes e2e suite [It] [sig-node] NoExecuteTaintManager Single Pod [Serial] pods evicted from tainted nodes have pod disruption condition
Kubernetes e2e suite [It] [sig-node] NoExecuteTaintManager Single Pod [Serial] removing taint cancels eviction [Disruptive] [Conformance]
Kubernetes e2e suite [It] [sig-node] NodeLease NodeLease should have OwnerReferences set
Kubernetes e2e suite [It] [sig-node] NodeLease NodeLease the kubelet should create and update a lease in the kube-node-lease namespace
Kubernetes e2e suite [It] [sig-node] NodeLease NodeLease the kubelet should report node status infrequently
Kubernetes e2e suite [It] [sig-node] NodeProblemDetector should run without error
Kubernetes e2e suite [It] [sig-node] Pod garbage collector [Feature:PodGarbageCollector] [Slow] should handle the creation of 1000 pods
Kubernetes e2e suite [It] [sig-node] PodOSRejection [NodeConformance] Kubelet should reject pod when the node OS doesn't match pod's OS
Kubernetes e2e suite [It] [sig-node] PodTemplates should delete a collection of pod templates [Conformance]
Kubernetes e2e suite [It] [sig-node] PodTemplates should replace a pod template [Conformance]
Kubernetes e2e suite [It] [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods Extended Delete Grace Period should be submitted and removed
Kubernetes e2e suite [It] [sig-node] Pods Extended Pod Container Status should never report container start when an init container fails
Kubernetes e2e suite [It] [sig-node] Pods Extended Pod Container Status should never report success for a pending container
Kubernetes e2e suite [It] [sig-node] Pods Extended Pod Container lifecycle evicted pods should be terminal
Kubernetes e2e suite [It] [sig-node] Pods Extended Pod Container lifecycle should not create extra sandbox if all containers are done
Kubernetes e2e suite [It] [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should be updated [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
Kubernetes e2e suite [It] [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should delete a collection of pods [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should get a host IP [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]
Kubernetes e2e suite [It] [sig-node] Pods should patch a pod status [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should support pod readiness gates [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] PreStop graceful pod terminated should wait until preStop hook completes the process
Kubernetes e2e suite [It] [sig-node] PreStop should call prestop when killing a pod [Conformance]
Kubernetes e2e suite [It] [sig-node] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Probing container should *not* be restarted by liveness probe because startup probe delays it
Kubernetes e2e suite [It] [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container should *not* be restarted with a GRPC liveness probe [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container should *not* be restarted with a non-local redirect http liveness probe
Kubernetes e2e suite [It] [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container should be ready immediately after startupProbe succeeds
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted by liveness probe after startup probe enables it
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted startup probe fails
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted with a GRPC liveness probe [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted with a failing exec liveness probe that took longer than the timeout
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted with a local redirect http liveness probe
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container should mark readiness on pods to false and disable liveness probes while pod is in progress of terminating
Kubernetes e2e suite [It] [sig-node] Probing container should mark readiness on pods to false while pod is in progress of terminating when a pod has a readiness probe
Kubernetes e2e suite [It] [sig-node] Probing container should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Probing container should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod]
Kubernetes e2e suite [It] [sig-node] Probing container should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod]
Kubernetes e2e suite [It] [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with an unconfigured handler [NodeFeature:RuntimeHandler]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with conflicting node selector
Kubernetes e2e suite [It] [sig-node] RuntimeClass should reject a Pod requesting a deleted RuntimeClass [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should reject a Pod requesting a non-existent RuntimeClass [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with a configured handler [NodeFeature:RuntimeHandler]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with scheduling with taints [Serial]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with scheduling without taints
Kubernetes e2e suite [It] [sig-node] RuntimeClass should schedule a Pod requesting a RuntimeClass and initialize its Overhead [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should schedule a Pod requesting a RuntimeClass without PodOverhead [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] SSH should SSH to all nodes and run commands
Kubernetes e2e suite [It] [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]
Kubernetes e2e suite [It] [sig-node] Secrets should patch a secret [Conformance]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a container with runAsNonRoot should not run without a specified user ID
Kubernetes e2e suite [It] [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a container with runAsNonRoot should run with an image specified user ID
Kubernetes e2e suite [It] [sig-node] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with HostUsers must create the user namespace if set to false [LinuxOnly] [Feature:UserNamespacesStatelessPodsSupport]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with HostUsers must not create the user namespace if set to true [LinuxOnly] [Feature:UserNamespacesStatelessPodsSupport]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with HostUsers should mount all volumes with proper permissions with hostUsers=false [LinuxOnly] [Feature:UserNamespacesStatelessPodsSupport]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with HostUsers should set FSGroup to user inside the container with hostUsers=false [LinuxOnly] [Feature:UserNamespacesStatelessPodsSupport]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support seccomp runtime/default [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support seccomp unconfined on the container [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support seccomp unconfined on the pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support volume SELinux relabeling [Flaky] [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support volume SELinux relabeling when using hostIPC [Flaky] [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support volume SELinux relabeling when using hostPID [Flaky] [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Security Context when if the container's primary UID belongs to some groups in the image [LinuxOnly] should add pod.Spec.SecurityContext.SupplementalGroups to them [LinuxOnly] in resultant supplementary groups for the container processes
Kubernetes e2e suite [It] [sig-node] Sysctls [LinuxOnly] [NodeConformance] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]
Kubernetes e2e suite [It] [sig-node] Sysctls [LinuxOnly] [NodeConformance] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]
Kubernetes e2e suite [It] [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
Kubernetes e2e suite [It] [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support sysctls with slashes as separator [MinimumKubeletVersion:1.23]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-node] [Feature:Example] Downward API should create a pod that prints his name and namespace
Kubernetes e2e suite [It] [sig-node] [Feature:Example] Liveness liveness pods should be automatically restarted
Kubernetes e2e suite [It] [sig-node] [Feature:Example] Secret should create a pod that reads a secret
Kubernetes e2e suite [It] [sig-node] crictl should be able to run crictl on the node
Kubernetes e2e suite [It] [sig-node] gpu Upgrade [Feature:GPUUpgrade] cluster downgrade should be able to run gpu pod after downgrade [Feature:GPUClusterDowngrade]
Kubernetes e2e suite [It] [sig-node] gpu Upgrade [Feature:GPUUpgrade] cluster upgrade should be able to run gpu pod after upgrade [Feature:GPUClusterUpgrade]
Kubernetes e2e suite [It] [sig-node] gpu Upgrade [Feature:GPUUpgrade] master upgrade should NOT disrupt gpu pod [Feature:GPUMasterUpgrade]
Kubernetes e2e suite [It] [sig-node] kubelet Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.
Kubernetes e2e suite [It] [sig-node] kubelet host cleanup with volume mounts [HostCleanup][Flaky] Host cleanup after disrupting NFS volume [NFS] after stopping the nfs-server and deleting the (active) client pod, the NFS mount and the pod's UID directory should be removed.
Kubernetes e2e suite [It] [sig-node] kubelet host cleanup with volume mounts [HostCleanup][Flaky] Host cleanup after disrupting NFS volume [NFS] after stopping the nfs-server and deleting the (sleeping) client pod, the NFS mount and the pod's UID directory should be removed.
Kubernetes e2e suite [It] [sig-node] kubelet kubectl node-logs <node-name> [Feature:add node log viewer] should return the logs
Kubernetes e2e suite [It] [sig-node] kubelet kubectl node-logs <node-name> [Feature:add node log viewer] should return the logs for the provided path
Kubernetes e2e suite [It] [sig-node] kubelet kubectl node-logs <node-name> [Feature:add node log viewer] should return the logs for the requested service
Kubernetes e2e suite [It] [sig-scheduling] GPUDevicePluginAcrossRecreate [Feature:Recreate] run Nvidia GPU Device Plugin tests with a recreation
Kubernetes e2e suite [It] [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] LimitRange should list, patch and delete a LimitRange by collection [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] Multi-AZ Clusters should spread the pods of a replication controller across zones [Serial]
Kubernetes e2e suite [It] [sig-scheduling] Multi-AZ Clusters should spread the pods of a service across zones [Serial]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates Pods with non-empty schedulingGates are blocked on scheduling [Feature:PodSchedulingReadiness] [alpha]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPreemption [Serial] validates pod disruption condition is added to the preempted pod
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed
Kubernetes e2e suite [It] [sig-scheduling] [Feature:GPUDevicePlugin] run Nvidia GPU Device Plugin tests
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] capacity provides storage capacity information
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] read-write-once-pod[Feature:ReadWriteOncePod] should block a second pod from using an in-use ReadWriteOncePod volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] read-write-once-pod[Feature:ReadWriteOncePod] should block a second pod from using an in-use ReadWriteOncePod volume on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volume-lifecycle-performance should provision volumes at scale within performance constraints [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable-stress[Feature:VolumeSnapshotDataSource] should support snapshotting of many volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable-stress[Feature:VolumeSnapshotDataSource] should support snapshotting of many volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes