go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-autoscaling\]\sCluster\ssize\sautoscaling\s\[Slow\]\sShould\sbe\sable\sto\sscale\sa\snode\sgroup\sdown\sto\s0\[Feature\:ClusterSizeAutoscalingScaleDown\]$'
test/e2e/autoscaling/cluster_size_autoscaling.go:868
k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.30()
	test/e2e/autoscaling/cluster_size_autoscaling.go:868 +0x429
k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.31()
	test/e2e/autoscaling/cluster_size_autoscaling.go:881 +0x95

from junit_01.xml
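For readability: the goroutine in the progress reports below is parked in a poll loop. The arguments visible in the stack decode to a 20-second sleep (0x4a817c800 ns) and a 20-minute timeout (0x1176592e000 ns), which matches the ~20s cadence of the "Waiting for cluster ..." lines in the log. A minimal Go sketch of that loop's shape follows; it is not the verbatim upstream helper (the real WaitForClusterSizeFuncWithUnready takes a clientset, lists nodes, and tolerates a configurable number of not-ready nodes), just the timing behavior implied by the stack:

    package main

    import (
    	"fmt"
    	"time"
    )

    // waitForSizeFunc sketches the loop the Spec Goroutine is sleeping in:
    // poll the node count every interval until sizeFunc accepts it or the
    // timeout elapses. In the hung run, interval is 20s (0x4a817c800 ns)
    // and the timeout is 20 minutes (0x1176592e000 ns).
    func waitForSizeFunc(getSize func() int, sizeFunc func(int) bool, timeout, interval time.Duration) error {
    	for start := time.Now(); time.Since(start) < timeout; time.Sleep(interval) {
    		size := getSize()
    		// This is the line flooding the log below.
    		fmt.Printf("Waiting for cluster with func, current size %d, not ready nodes 0\n", size)
    		if sizeFunc(size) {
    			return nil
    		}
    	}
    	return fmt.Errorf("timeout waiting %v for appropriate cluster size", timeout)
    }

    func main() {
    	// The drained node never deregisters, so the reported size stays
    	// at 2, sizeFunc never accepts it, and the loop runs until the
    	// timeout — exactly the repeated "current size 2" lines below.
    	// Timings are shortened here so the demo finishes quickly.
    	err := waitForSizeFunc(
    		func() int { return 2 },                  // cluster stuck at 2 nodes
    		func(size int) bool { return size <= 1 }, // want the group drained down
    		300*time.Millisecond, 100*time.Millisecond,
    	)
    	fmt.Println(err) // timeout error, analogous to the failure at line 868
    }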
[BeforeEach] [sig-autoscaling] Cluster size autoscaling [Slow]
  set up framework | framework.go:178
STEP: Creating a kubernetes client 11/30/22 13:17:31.454
Nov 30 13:17:31.454: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename autoscaling 11/30/22 13:17:31.456
STEP: Waiting for a default service account to be provisioned in namespace 11/30/22 13:17:31.583
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/30/22 13:17:31.666
[BeforeEach] [sig-autoscaling] Cluster size autoscaling [Slow]
  test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-autoscaling] Cluster size autoscaling [Slow]
  test/e2e/autoscaling/cluster_size_autoscaling.go:103
STEP: Initial size of ca-minion-group-1: 1 11/30/22 13:17:35.625
STEP: Initial size of ca-minion-group: 1 11/30/22 13:17:39.158
Nov 30 13:17:39.203: INFO: Cluster has reached the desired number of ready nodes 2
STEP: Initial number of schedulable nodes: 2 11/30/22 13:17:39.251
[It] Should be able to scale a node group down to 0[Feature:ClusterSizeAutoscalingScaleDown]
  test/e2e/autoscaling/cluster_size_autoscaling.go:877
STEP: Find smallest node group and manually scale it to a single node 11/30/22 13:17:39.251
Nov 30 13:17:39.251: INFO: Skipping dumping logs from cluster
Nov 30 13:17:43.842: INFO: Skipping dumping logs from cluster
Nov 30 13:17:43.885: INFO: Cluster has reached the desired number of ready nodes 2
STEP: Target node for scale-down: ca-minion-group-wp8h 11/30/22 13:17:47.456
STEP: Make the single node unschedulable 11/30/22 13:17:47.456
STEP: Taint node ca-minion-group-wp8h 11/30/22 13:17:47.456
STEP: Manually drain the single node 11/30/22 13:17:47.557
I1130 13:17:48.300295 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 2, not ready nodes 0
I1130 13:18:08.345586 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 2, not ready nodes 0
I1130 13:18:28.393097 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 2, not ready nodes 0
I1130 13:18:48.437522 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 2, not ready nodes 0
I1130 13:19:08.481997 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 2, not ready nodes 0
I1130 13:19:28.526555 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 2, not ready nodes 0
I1130 13:19:48.570863 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 2, not ready nodes 0
I1130 13:20:08.615198 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 2, not ready nodes 0
I1130 13:20:28.659853 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 2, not ready nodes 0
I1130 13:20:48.705940 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 2, not ready nodes 0
I1130 13:21:08.751585 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 2, not ready nodes 0
I1130 13:21:28.795479 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 2, not ready nodes 0
I1130 13:21:48.840087 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 2, not ready nodes 0
I1130 13:22:08.884205 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 2, not ready nodes 0
I1130 13:22:28.928164 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 2, not ready nodes 0
------------------------------
Automatically polling progress:
  [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group down to 0[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 5m7.799s)
    test/e2e/autoscaling/cluster_size_autoscaling.go:877
    In [It] (Node Runtime: 5m0.003s)
      test/e2e/autoscaling/cluster_size_autoscaling.go:877
      At [By Step] Manually drain the single node (Step Runtime: 4m51.697s)
        test/e2e/autoscaling/cluster_size_autoscaling.go:1465

  Spec Goroutine
  goroutine 10721 [sleep]
    time.Sleep(0x4a817c800)
      /usr/local/go/src/runtime/time.go:195
    > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002e6eb60}, 0xc004d03e88, 0x1176592e000, 0x0)
      test/e2e/autoscaling/cluster_size_autoscaling.go:1364
    > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...)
      test/e2e/autoscaling/cluster_size_autoscaling.go:1359
    > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.30()
      test/e2e/autoscaling/cluster_size_autoscaling.go:868
    > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.31()
      test/e2e/autoscaling/cluster_size_autoscaling.go:881
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x279c35e, 0x7fb9a58})
      vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
I1130 13:22:48.972020 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 2, not ready nodes 0
------------------------------
[... the same progress report, with an identical Spec Goroutine stack, and the 20-second "Waiting for cluster with func, current size 2, not ready nodes 0" log lines repeat here and are elided; only the runtimes advance, from Spec Runtime 5m27.802s through 16m27.881s (log lines 13:23:09 through 13:33:50) ...]
------------------------------
I1130 13:34:10.559265 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 2, not ready nodes 0
------------------------------
Automatically polling progress:
  [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group down to 0[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 16m47.883s)
    test/e2e/autoscaling/cluster_size_autoscaling.go:877
    In [It] (Node Runtime: 16m40.086s)
      test/e2e/autoscaling/cluster_size_autoscaling.go:877
      At [By Step] Manually drain the single node (Step Runtime: 16m31.78s)
        test/e2e/autoscaling/cluster_size_autoscaling.go:1465

  Spec Goroutine
  goroutine 10721 [sleep]
    time.Sleep(0x4a817c800)
      /usr/local/go/src/runtime/time.go:195
    > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002e6eb60}, 0xc004d03e88, 0x1176592e000, 0x0)
      test/e2e/autoscaling/cluster_size_autoscaling.go:1364
    > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...)
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.30() test/e2e/autoscaling/cluster_size_autoscaling.go:868 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.31() test/e2e/autoscaling/cluster_size_autoscaling.go:881 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x279c35e, 0x7fb9a58}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1130 13:34:30.605050 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 2, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group down to 0[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 17m7.884s) test/e2e/autoscaling/cluster_size_autoscaling.go:877 In [It] (Node Runtime: 17m0.087s) test/e2e/autoscaling/cluster_size_autoscaling.go:877 At [By Step] Manually drain the single node (Step Runtime: 16m51.782s) test/e2e/autoscaling/cluster_size_autoscaling.go:1465 Spec Goroutine goroutine 10721 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002e6eb60}, 0xc004d03e88, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.30() test/e2e/autoscaling/cluster_size_autoscaling.go:868 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.31() test/e2e/autoscaling/cluster_size_autoscaling.go:881 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x279c35e, 0x7fb9a58}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1130 13:34:50.649261 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 2, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group down to 0[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 17m27.886s) test/e2e/autoscaling/cluster_size_autoscaling.go:877 In [It] (Node Runtime: 17m20.089s) test/e2e/autoscaling/cluster_size_autoscaling.go:877 At [By Step] Manually drain the single node (Step Runtime: 17m11.783s) test/e2e/autoscaling/cluster_size_autoscaling.go:1465 Spec Goroutine goroutine 10721 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002e6eb60}, 0xc004d03e88, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.30() test/e2e/autoscaling/cluster_size_autoscaling.go:868 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.31() test/e2e/autoscaling/cluster_size_autoscaling.go:881 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x279c35e, 0x7fb9a58}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1130 13:35:10.696247 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 2, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group down to 0[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 17m47.887s) test/e2e/autoscaling/cluster_size_autoscaling.go:877 In [It] (Node Runtime: 17m40.09s) test/e2e/autoscaling/cluster_size_autoscaling.go:877 At [By Step] Manually drain the single node (Step Runtime: 17m31.784s) test/e2e/autoscaling/cluster_size_autoscaling.go:1465 Spec Goroutine goroutine 10721 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002e6eb60}, 0xc004d03e88, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.30() test/e2e/autoscaling/cluster_size_autoscaling.go:868 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.31() test/e2e/autoscaling/cluster_size_autoscaling.go:881 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x279c35e, 0x7fb9a58}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1130 13:35:30.740137 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 2, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group down to 0[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 18m7.888s) test/e2e/autoscaling/cluster_size_autoscaling.go:877 In [It] (Node Runtime: 18m0.091s) test/e2e/autoscaling/cluster_size_autoscaling.go:877 At [By Step] Manually drain the single node (Step Runtime: 17m51.785s) test/e2e/autoscaling/cluster_size_autoscaling.go:1465 Spec Goroutine goroutine 10721 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002e6eb60}, 0xc004d03e88, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.30() test/e2e/autoscaling/cluster_size_autoscaling.go:868 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.31() test/e2e/autoscaling/cluster_size_autoscaling.go:881 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x279c35e, 0x7fb9a58}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1130 13:35:50.794698 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 2, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group down to 0[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 18m27.89s) test/e2e/autoscaling/cluster_size_autoscaling.go:877 In [It] (Node Runtime: 18m20.094s) test/e2e/autoscaling/cluster_size_autoscaling.go:877 At [By Step] Manually drain the single node (Step Runtime: 18m11.788s) test/e2e/autoscaling/cluster_size_autoscaling.go:1465 Spec Goroutine goroutine 10721 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002e6eb60}, 0xc004d03e88, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.30() test/e2e/autoscaling/cluster_size_autoscaling.go:868 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.31() test/e2e/autoscaling/cluster_size_autoscaling.go:881 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x279c35e, 0x7fb9a58}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1130 13:36:10.838105 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 2, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group down to 0[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 18m47.892s) test/e2e/autoscaling/cluster_size_autoscaling.go:877 In [It] (Node Runtime: 18m40.095s) test/e2e/autoscaling/cluster_size_autoscaling.go:877 At [By Step] Manually drain the single node (Step Runtime: 18m31.789s) test/e2e/autoscaling/cluster_size_autoscaling.go:1465 Spec Goroutine goroutine 10721 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002e6eb60}, 0xc004d03e88, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.30() test/e2e/autoscaling/cluster_size_autoscaling.go:868 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.31() test/e2e/autoscaling/cluster_size_autoscaling.go:881 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x279c35e, 0x7fb9a58}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1130 13:36:30.884366 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 2, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group down to 0[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 19m7.894s) test/e2e/autoscaling/cluster_size_autoscaling.go:877 In [It] (Node Runtime: 19m0.098s) test/e2e/autoscaling/cluster_size_autoscaling.go:877 At [By Step] Manually drain the single node (Step Runtime: 18m51.792s) test/e2e/autoscaling/cluster_size_autoscaling.go:1465 Spec Goroutine goroutine 10721 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002e6eb60}, 0xc004d03e88, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.30() test/e2e/autoscaling/cluster_size_autoscaling.go:868 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.31() test/e2e/autoscaling/cluster_size_autoscaling.go:881 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x279c35e, 0x7fb9a58}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1130 13:36:50.928941 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 2, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group down to 0[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 19m27.897s) test/e2e/autoscaling/cluster_size_autoscaling.go:877 In [It] (Node Runtime: 19m20.1s) test/e2e/autoscaling/cluster_size_autoscaling.go:877 At [By Step] Manually drain the single node (Step Runtime: 19m11.795s) test/e2e/autoscaling/cluster_size_autoscaling.go:1465 Spec Goroutine goroutine 10721 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002e6eb60}, 0xc004d03e88, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.30() test/e2e/autoscaling/cluster_size_autoscaling.go:868 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.31() test/e2e/autoscaling/cluster_size_autoscaling.go:881 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x279c35e, 0x7fb9a58}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1130 13:37:10.977845 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 2, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group down to 0[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 19m47.899s) test/e2e/autoscaling/cluster_size_autoscaling.go:877 In [It] (Node Runtime: 19m40.102s) test/e2e/autoscaling/cluster_size_autoscaling.go:877 At [By Step] Manually drain the single node (Step Runtime: 19m31.796s) test/e2e/autoscaling/cluster_size_autoscaling.go:1465 Spec Goroutine goroutine 10721 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002e6eb60}, 0xc004d03e88, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.30() test/e2e/autoscaling/cluster_size_autoscaling.go:868 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.31() test/e2e/autoscaling/cluster_size_autoscaling.go:881 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x279c35e, 0x7fb9a58}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1130 13:37:31.021947 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 2, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group down to 0[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 20m7.903s) test/e2e/autoscaling/cluster_size_autoscaling.go:877 In [It] (Node Runtime: 20m0.106s) test/e2e/autoscaling/cluster_size_autoscaling.go:877 At [By Step] Manually drain the single node (Step Runtime: 19m51.8s) test/e2e/autoscaling/cluster_size_autoscaling.go:1465 Spec Goroutine goroutine 10721 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002e6eb60}, 0xc004d03e88, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
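The goroutine dump makes the hang mechanical rather than mysterious: the spec is parked in a sleep-and-recheck loop whose constants are visible in the hex arguments. time.Sleep(0x4a817c800) is 20,000,000,000 ns, i.e. 20s per poll, and the 0x1176592e000 passed to WaitForClusterSizeFuncWithUnready is 1,200,000,000,000 ns, i.e. the 20m timeout. Below is a minimal sketch of that polling pattern; names, signatures, and structure are illustrative assumptions, not the actual helpers in cluster_size_autoscaling.go:

```go
package main

import (
	"fmt"
	"time"
)

// waitForClusterSize is a sketch, not the real e2e helper: poll a size
// predicate on a fixed interval until a deadline, printing the same style of
// log line seen above, then fail with the timeout error that ends the spec.
func waitForClusterSize(getSize func() int, sizeOK func(int) bool, interval, timeout time.Duration) error {
	for start := time.Now(); time.Since(start) < timeout; time.Sleep(interval) {
		size := getSize()
		fmt.Printf("Waiting for cluster with func, current size %d, not ready nodes 0\n", size)
		if sizeOK(size) {
			return nil
		}
	}
	return fmt.Errorf("timeout waiting %v for appropriate cluster size", timeout)
}

func main() {
	// In the failing run the interval is 20s and the timeout 20m; a cluster
	// stuck at 2 nodes never satisfies the size==1 predicate, so the loop can
	// only exit via the timeout. Durations shortened so this demo terminates.
	err := waitForClusterSize(
		func() int { return 2 },            // cluster never shrinks
		func(n int) bool { return n == 1 }, // test wants the drained group at 1 node
		200*time.Millisecond, time.Second,
	)
	fmt.Println(err) // timeout waiting 1s for appropriate cluster size
}
```

With the cluster pinned at 2 nodes and the predicate waiting for the drained group to shrink, every cycle logs the same line until the 20m deadline converts into the failure below.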
------------------------------
Nov 30 13:37:51.025: INFO: Unexpected error:
    <*errors.errorString | 0xc000eaaf40>: {
        s: "timeout waiting 20m0s for appropriate cluster size",
    }
Nov 30 13:37:51.025: FAIL: timeout waiting 20m0s for appropriate cluster size

Full Stack Trace
k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.30()
    test/e2e/autoscaling/cluster_size_autoscaling.go:868 +0x429
k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.31()
    test/e2e/autoscaling/cluster_size_autoscaling.go:881 +0x95
[AfterEach] [sig-autoscaling] Cluster size autoscaling [Slow]
  test/e2e/framework/node/init/init.go:32
Nov 30 13:37:51.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-autoscaling] Cluster size autoscaling [Slow]
  test/e2e/autoscaling/cluster_size_autoscaling.go:139
STEP: Restoring initial size of the cluster 11/30/22 13:37:51.07
Nov 30 13:37:58.530: INFO: Cluster has reached the desired number of ready nodes 2
STEP: Remove taint from node ca-master 11/30/22 13:37:58.573
STEP: Remove taint from node ca-minion-group-1-mm7j 11/30/22 13:37:58.616
STEP: Remove taint from node ca-minion-group-wp8h 11/30/22 13:37:58.658
I1130 13:37:58.751806 8016 cluster_size_autoscaling.go:165] Made nodes schedulable again in 178.78843ms
[DeferCleanup (Each)] [sig-autoscaling] Cluster size autoscaling [Slow]
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-autoscaling] Cluster size autoscaling [Slow]
  dump namespaces | framework.go:196
STEP: dump namespace information after failure 11/30/22 13:37:58.752
STEP: Collecting events from namespace "autoscaling-9836". 11/30/22 13:37:58.752
STEP: Found 0 events.
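For reference, the "Taint node ..." and "Remove taint from node ..." steps above are plain node updates through the API server. A hedged client-go sketch of both halves follows; the taint key and value here are assumptions for illustration (the e2e suite defines its own constants), and a production version would retry on update conflicts:

```go
package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// testTaintKey is a hypothetical key for this sketch, not necessarily the
// constant the e2e suite uses for the taint it applies before draining.
const testTaintKey = "DisabledForAutoscalingTest"

// taintNode mirrors the "Taint node ca-minion-group-wp8h" step: fetch the
// node, append a NoSchedule taint, and write the node back.
func taintNode(ctx context.Context, cs kubernetes.Interface, name string) error {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	node.Spec.Taints = append(node.Spec.Taints, v1.Taint{
		Key:    testTaintKey,
		Value:  "DisabledForTest",
		Effect: v1.TaintEffectNoSchedule,
	})
	_, err = cs.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{})
	return err
}

// untaintNode mirrors the AfterEach "Remove taint from node ..." steps: keep
// every taint except the one the test added.
func untaintNode(ctx context.Context, cs kubernetes.Interface, name string) error {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	kept := node.Spec.Taints[:0]
	for _, t := range node.Spec.Taints {
		if t.Key != testTaintKey {
			kept = append(kept, t)
		}
	}
	node.Spec.Taints = kept
	_, err = cs.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{})
	return err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()
	if err := taintNode(ctx, cs, "ca-minion-group-wp8h"); err != nil {
		panic(err)
	}
	fmt.Println("tainted; the drain would follow, then untaintNode in cleanup")
	_ = untaintNode // cleanup half, shown for symmetry
}
```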
11/30/22 13:37:58.796 Nov 30 13:37:58.837: INFO: POD NODE PHASE GRACE CONDITIONS Nov 30 13:37:58.837: INFO: Nov 30 13:37:58.880: INFO: Logging node info for node ca-master Nov 30 13:37:58.922: INFO: Node Info: &Node{ObjectMeta:{ca-master 2a25a1e5-76d1-4d88-8f78-b63dca9ba016 54479 0 2022-11-30 08:55:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:ca-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-30 08:55:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-30 08:55:56 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2022-11-30 08:56:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2022-11-30 13:37:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-autoscaling-migs/us-west1-b/ca-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-30 08:55:56 +0000 UTC,LastTransitionTime:2022-11-30 08:55:56 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-30 13:37:18 +0000 UTC,LastTransitionTime:2022-11-30 08:55:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-30 13:37:18 +0000 UTC,LastTransitionTime:2022-11-30 08:55:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-30 13:37:18 +0000 UTC,LastTransitionTime:2022-11-30 08:55:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-30 13:37:18 +0000 UTC,LastTransitionTime:2022-11-30 08:56:05 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.230.76.149,},NodeAddress{Type:InternalDNS,Address:ca-master.c.k8s-jkns-gci-autoscaling-migs.internal,},NodeAddress{Type:Hostname,Address:ca-master.c.k8s-jkns-gci-autoscaling-migs.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bac070a384fefb9133ee7878e15673cf,SystemUUID:bac070a3-84fe-fb91-33ee-7878e15673cf,BootID:fe19ddf9-af1e-416e-a389-0ed6e929f60e,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.6.6,KubeletVersion:v1.27.0-alpha.0.53+d98e9620e37995,KubeProxyVersion:v1.27.0-alpha.0.53+d98e9620e37995,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.53_d98e9620e37995],SizeBytes:135160275,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.53_d98e9620e37995],SizeBytes:124990265,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.53_d98e9620e37995],SizeBytes:57660216,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/autoscaling/cluster-autoscaler@sha256:07ab8c89cd0ad296ddb6347febe196d8fe0e1c757656a98f71199860d87cf1a5 registry.k8s.io/autoscaling/cluster-autoscaler:v1.22.0],SizeBytes:24220268,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c 
registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 30 13:37:58.923: INFO: Logging kubelet events for node ca-master Nov 30 13:37:58.966: INFO: Logging pods the kubelet thinks is on node ca-master Nov 30 13:37:59.028: INFO: kube-apiserver-ca-master started at 2022-11-30 08:55:00 +0000 UTC (0+1 container statuses recorded) Nov 30 13:37:59.028: INFO: Container kube-apiserver ready: true, restart count 0 Nov 30 13:37:59.028: INFO: kube-controller-manager-ca-master started at 2022-11-30 08:55:00 +0000 UTC (0+1 container statuses recorded) Nov 30 13:37:59.028: INFO: Container kube-controller-manager ready: true, restart count 1 Nov 30 13:37:59.028: INFO: kube-scheduler-ca-master started at 2022-11-30 08:55:00 +0000 UTC (0+1 container statuses recorded) Nov 30 13:37:59.028: INFO: Container kube-scheduler ready: true, restart count 0 Nov 30 13:37:59.028: INFO: etcd-server-events-ca-master started at 2022-11-30 08:55:00 +0000 UTC (0+1 container statuses recorded) Nov 30 13:37:59.028: INFO: Container etcd-container ready: true, restart count 0 Nov 30 13:37:59.028: INFO: etcd-server-ca-master started at 2022-11-30 08:55:00 +0000 UTC (0+1 container statuses recorded) Nov 30 13:37:59.029: INFO: Container etcd-container ready: true, restart count 0 Nov 30 13:37:59.029: INFO: cluster-autoscaler-ca-master started at 2022-11-30 08:55:18 +0000 UTC (0+1 container statuses recorded) Nov 30 13:37:59.029: INFO: Container cluster-autoscaler ready: true, restart count 2 Nov 30 13:37:59.029: INFO: l7-lb-controller-ca-master started at 2022-11-30 08:55:18 +0000 UTC (0+1 container statuses recorded) Nov 30 13:37:59.029: INFO: Container l7-lb-controller ready: true, restart count 2 Nov 30 13:37:59.029: INFO: konnectivity-server-ca-master started at 2022-11-30 08:55:00 +0000 UTC (0+1 container statuses recorded) Nov 30 13:37:59.029: INFO: Container konnectivity-server-container ready: true, restart count 0 Nov 30 13:37:59.029: INFO: kube-addon-manager-ca-master started at 2022-11-30 08:55:18 +0000 UTC (0+1 container statuses recorded) Nov 30 13:37:59.029: INFO: Container kube-addon-manager ready: true, restart count 0 Nov 30 13:37:59.029: INFO: metadata-proxy-v0.1-vp7mp started at 2022-11-30 08:56:16 +0000 UTC (0+2 container statuses recorded) Nov 30 13:37:59.029: INFO: Container metadata-proxy ready: true, restart count 0 Nov 30 13:37:59.029: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 30 13:37:59.220: INFO: Latency metrics for node ca-master Nov 30 13:37:59.220: INFO: Logging node info for node ca-minion-group-1-mm7j Nov 30 13:37:59.265: INFO: Node Info: &Node{ObjectMeta:{ca-minion-group-1-mm7j 40d95e1e-df8f-497c-a618-47cbb01b66c1 54140 0 2022-11-30 13:15:02 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:ca-minion-group-1-mm7j kubernetes.io/os:linux 
node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-30 13:15:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-30 13:15:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.51.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-30 13:15:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-30 13:34:25 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status} {node-problem-detector Update v1 2022-11-30 13:35:08 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.51.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-autoscaling-migs/us-west1-b/ca-minion-group-1-mm7j,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.51.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-30 13:35:08 +0000 UTC,LastTransitionTime:2022-11-30 13:15:06 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-30 13:35:08 +0000 UTC,LastTransitionTime:2022-11-30 13:15:06 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-30 13:35:08 +0000 UTC,LastTransitionTime:2022-11-30 13:15:06 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-30 13:35:08 +0000 UTC,LastTransitionTime:2022-11-30 13:15:06 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-30 13:35:08 +0000 UTC,LastTransitionTime:2022-11-30 13:15:06 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-30 13:35:08 +0000 UTC,LastTransitionTime:2022-11-30 13:15:06 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-30 13:35:08 +0000 UTC,LastTransitionTime:2022-11-30 13:15:06 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-30 13:15:09 +0000 UTC,LastTransitionTime:2022-11-30 13:15:09 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-30 13:34:25 +0000 UTC,LastTransitionTime:2022-11-30 13:15:02 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-30 13:34:25 +0000 UTC,LastTransitionTime:2022-11-30 13:15:02 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-30 13:34:25 +0000 UTC,LastTransitionTime:2022-11-30 13:15:02 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-30 13:34:25 +0000 UTC,LastTransitionTime:2022-11-30 13:15:03 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.53,},NodeAddress{Type:ExternalIP,Address:34.145.0.223,},NodeAddress{Type:InternalDNS,Address:ca-minion-group-1-mm7j.c.k8s-jkns-gci-autoscaling-migs.internal,},NodeAddress{Type:Hostname,Address:ca-minion-group-1-mm7j.c.k8s-jkns-gci-autoscaling-migs.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ae2a11e0059c5fa7bcad4e6bceb44819,SystemUUID:ae2a11e0-059c-5fa7-bcad-4e6bceb44819,BootID:27318124-2ed5-493a-91c2-cb421fd44484,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.6.6,KubeletVersion:v1.27.0-alpha.0.53+d98e9620e37995,KubeProxyVersion:v1.27.0-alpha.0.53+d98e9620e37995,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.53_d98e9620e37995],SizeBytes:67201736,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 30 13:37:59.265: INFO: Logging kubelet events for node ca-minion-group-1-mm7j Nov 30 13:37:59.309: INFO: Logging pods the kubelet thinks is on node ca-minion-group-1-mm7j Nov 30 13:37:59.387: INFO: metadata-proxy-v0.1-g8lnd started at 2022-11-30 13:15:03 +0000 UTC (0+2 container statuses recorded) Nov 30 13:37:59.387: INFO: Container metadata-proxy ready: true, restart count 0 Nov 30 13:37:59.387: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 30 13:37:59.387: INFO: konnectivity-agent-sv8wx started at 2022-11-30 13:15:09 +0000 UTC (0+1 container statuses recorded) Nov 30 13:37:59.387: INFO: Container konnectivity-agent ready: 
true, restart count 0 Nov 30 13:37:59.387: INFO: coredns-6d97d5ddb-99wqh started at 2022-11-30 13:17:47 +0000 UTC (0+1 container statuses recorded) Nov 30 13:37:59.387: INFO: Container coredns ready: true, restart count 0 Nov 30 13:37:59.387: INFO: l7-default-backend-8549d69d99-lvtrj started at 2022-11-30 13:17:47 +0000 UTC (0+1 container statuses recorded) Nov 30 13:37:59.387: INFO: Container default-http-backend ready: true, restart count 0 Nov 30 13:37:59.387: INFO: metrics-server-v0.5.2-867b8754b9-pmffs started at 2022-11-30 13:17:48 +0000 UTC (0+2 container statuses recorded) Nov 30 13:37:59.387: INFO: Container metrics-server ready: true, restart count 0 Nov 30 13:37:59.387: INFO: Container metrics-server-nanny ready: true, restart count 0 Nov 30 13:37:59.387: INFO: volume-snapshot-controller-0 started at 2022-11-30 13:17:48 +0000 UTC (0+1 container statuses recorded) Nov 30 13:37:59.387: INFO: Container volume-snapshot-controller ready: true, restart count 0 Nov 30 13:37:59.387: INFO: kube-proxy-ca-minion-group-1-mm7j started at 2022-11-30 13:15:02 +0000 UTC (0+1 container statuses recorded) Nov 30 13:37:59.387: INFO: Container kube-proxy ready: true, restart count 0 Nov 30 13:37:59.575: INFO: Latency metrics for node ca-minion-group-1-mm7j Nov 30 13:37:59.575: INFO: Logging node info for node ca-minion-group-wp8h Nov 30 13:37:59.618: INFO: Node Info: &Node{ObjectMeta:{ca-minion-group-wp8h cabb7ed6-6a4e-4a14-a7cb-07ef65191e0f 54584 0 2022-11-30 09:09:55 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:ca-minion-group-wp8h kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-30 09:09:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-30 09:10:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.7.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-30 09:10:06 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-30 13:34:12 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status} {node-problem-detector Update v1 2022-11-30 13:35:30 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.7.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-autoscaling-migs/us-west1-b/ca-minion-group-wp8h,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.7.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-30 13:35:30 +0000 UTC,LastTransitionTime:2022-11-30 09:09:59 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-30 13:35:30 +0000 UTC,LastTransitionTime:2022-11-30 09:09:59 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-30 13:35:30 +0000 UTC,LastTransitionTime:2022-11-30 09:09:59 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-30 13:35:30 +0000 UTC,LastTransitionTime:2022-11-30 09:09:59 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-30 13:35:30 +0000 UTC,LastTransitionTime:2022-11-30 
09:09:59 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-30 13:35:30 +0000 UTC,LastTransitionTime:2022-11-30 09:09:59 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-30 13:35:30 +0000 UTC,LastTransitionTime:2022-11-30 09:09:59 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-30 09:10:06 +0000 UTC,LastTransitionTime:2022-11-30 09:10:06 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-30 13:34:12 +0000 UTC,LastTransitionTime:2022-11-30 09:09:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-30 13:34:12 +0000 UTC,LastTransitionTime:2022-11-30 09:09:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-30 13:34:12 +0000 UTC,LastTransitionTime:2022-11-30 09:09:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-30 13:34:12 +0000 UTC,LastTransitionTime:2022-11-30 09:09:56 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.8,},NodeAddress{Type:ExternalIP,Address:34.168.80.138,},NodeAddress{Type:InternalDNS,Address:ca-minion-group-wp8h.c.k8s-jkns-gci-autoscaling-migs.internal,},NodeAddress{Type:Hostname,Address:ca-minion-group-wp8h.c.k8s-jkns-gci-autoscaling-migs.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b506b63fb01c6040e71588bca8be6fdd,SystemUUID:b506b63f-b01c-6040-e715-88bca8be6fdd,BootID:bd2be204-29ef-43fe-9f42-c8f31fa19831,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.6.6,KubeletVersion:v1.27.0-alpha.0.53+d98e9620e37995,KubeProxyVersion:v1.27.0-alpha.0.53+d98e9620e37995,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.53_d98e9620e37995],SizeBytes:67201736,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 
registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 30 13:37:59.618: INFO: Logging kubelet events for node ca-minion-group-wp8h Nov 30 13:37:59.663: INFO: Logging pods the kubelet thinks is on node ca-minion-group-wp8h Nov 30 13:37:59.724: INFO: metadata-proxy-v0.1-gxjsj started at 2022-11-30 13:17:48 +0000 UTC (0+2 container statuses recorded) Nov 30 13:37:59.724: INFO: Container metadata-proxy ready: true, restart count 0 Nov 30 13:37:59.724: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 30 13:37:59.724: INFO: konnectivity-agent-rr56r started at 2022-11-30 13:37:58 +0000 UTC (0+1 container statuses recorded) Nov 30 13:37:59.724: INFO: Container konnectivity-agent ready: false, restart count 0 Nov 30 13:37:59.724: INFO: kube-proxy-ca-minion-group-wp8h started at 2022-11-30 09:09:56 +0000 UTC (0+1 container statuses recorded) Nov 30 13:37:59.724: INFO: Container kube-proxy ready: true, restart count 0 Nov 30 13:37:59.898: INFO: Latency metrics for node ca-minion-group-wp8h [DeferCleanup (Each)] [sig-autoscaling] Cluster size autoscaling [Slow] tear down framework | framework.go:193 STEP: Destroying namespace "autoscaling-9836" for this suite. 11/30/22 13:37:59.898
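For orientation, the scale-down path exercised in this run starts by making the target node unschedulable via a taint before draining it. Below is a minimal client-go sketch of that tainting step, assuming a placeholder taint key "e2e-drain" and the kubeconfig path used by this job; the real suite goes through its own framework helpers.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// taintNode marks a node unschedulable for new pods by appending a
// NoSchedule taint, the mechanism the drain step relies on.
func taintNode(ctx context.Context, cs kubernetes.Interface, name string) error {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	node.Spec.Taints = append(node.Spec.Taints, corev1.Taint{
		Key:    "e2e-drain", // placeholder key, not the suite's actual one
		Effect: corev1.TaintEffectNoSchedule,
	})
	_, err = cs.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{})
	return err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := taintNode(context.Background(), cs, "ca-minion-group-wp8h"); err != nil {
		panic(err)
	}
	fmt.Println("node tainted")
}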
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-autoscaling\]\sCluster\ssize\sautoscaling\s\[Slow\]\sshould\sbe\sable\sto\sscale\sdown\sby\sdraining\smultiple\spods\sone\sby\sone\sas\sdictated\sby\spdb\[Feature\:ClusterSizeAutoscalingScaleDown\]$'
test/e2e/autoscaling/cluster_size_autoscaling.go:127
k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.1()
	test/e2e/autoscaling/cluster_size_autoscaling.go:127 +0x319
from junit_01.xml
[BeforeEach] [sig-autoscaling] Cluster size autoscaling [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/30/22 12:22:18.618 Nov 30 12:22:18.618: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename autoscaling 11/30/22 12:22:18.62 STEP: Waiting for a default service account to be provisioned in namespace 11/30/22 12:22:18.746 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/30/22 12:22:18.827 [BeforeEach] [sig-autoscaling] Cluster size autoscaling [Slow] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-autoscaling] Cluster size autoscaling [Slow] test/e2e/autoscaling/cluster_size_autoscaling.go:103 STEP: Initial size of ca-minion-group-1: 1 11/30/22 12:22:22.489 STEP: Initial size of ca-minion-group: 1 11/30/22 12:22:26.232 Nov 30 12:22:26.275: INFO: Cluster has reached the desired number of ready nodes 2 STEP: Initial number of schedulable nodes: 1 11/30/22 12:22:26.319 Nov 30 12:22:26.319: FAIL: Expected <int>: 1 to equal <int>: 2 Full Stack Trace k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.1() test/e2e/autoscaling/cluster_size_autoscaling.go:127 +0x319 [AfterEach] [sig-autoscaling] Cluster size autoscaling [Slow] test/e2e/framework/node/init/init.go:32 Nov 30 12:22:26.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-autoscaling] Cluster size autoscaling [Slow] test/e2e/autoscaling/cluster_size_autoscaling.go:139 STEP: Restoring initial size of the cluster 11/30/22 12:22:26.363 Nov 30 12:22:33.544: INFO: Cluster has reached the desired number of ready nodes 2 STEP: Remove taint from node ca-master 11/30/22 12:22:33.588 STEP: Remove taint from node ca-minion-group-1-ng86 11/30/22 12:22:33.631 STEP: Remove taint from node ca-minion-group-wp8h 11/30/22 12:22:33.673 I1130 12:22:33.716701 8016 cluster_size_autoscaling.go:165] Made nodes schedulable again in 128.212225ms [DeferCleanup (Each)] [sig-autoscaling] Cluster size autoscaling [Slow] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-autoscaling] Cluster size autoscaling [Slow] dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/30/22 12:22:33.716 STEP: Collecting events from namespace "autoscaling-2571". 11/30/22 12:22:33.717 STEP: Found 0 events. 
11/30/22 12:22:33.758 Nov 30 12:22:33.799: INFO: POD NODE PHASE GRACE CONDITIONS Nov 30 12:22:33.799: INFO: Nov 30 12:22:33.844: INFO: Logging node info for node ca-master Nov 30 12:22:33.887: INFO: Node Info: &Node{ObjectMeta:{ca-master 2a25a1e5-76d1-4d88-8f78-b63dca9ba016 41235 0 2022-11-30 08:55:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:ca-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-30 08:55:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-30 08:55:56 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2022-11-30 08:56:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2022-11-30 12:20:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-autoscaling-migs/us-west1-b/ca-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-30 08:55:56 +0000 UTC,LastTransitionTime:2022-11-30 08:55:56 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-30 12:20:41 +0000 UTC,LastTransitionTime:2022-11-30 08:55:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-30 12:20:41 +0000 UTC,LastTransitionTime:2022-11-30 08:55:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-30 12:20:41 +0000 UTC,LastTransitionTime:2022-11-30 08:55:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-30 12:20:41 +0000 UTC,LastTransitionTime:2022-11-30 08:56:05 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.230.76.149,},NodeAddress{Type:InternalDNS,Address:ca-master.c.k8s-jkns-gci-autoscaling-migs.internal,},NodeAddress{Type:Hostname,Address:ca-master.c.k8s-jkns-gci-autoscaling-migs.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bac070a384fefb9133ee7878e15673cf,SystemUUID:bac070a3-84fe-fb91-33ee-7878e15673cf,BootID:fe19ddf9-af1e-416e-a389-0ed6e929f60e,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.6.6,KubeletVersion:v1.27.0-alpha.0.53+d98e9620e37995,KubeProxyVersion:v1.27.0-alpha.0.53+d98e9620e37995,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.53_d98e9620e37995],SizeBytes:135160275,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.53_d98e9620e37995],SizeBytes:124990265,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.53_d98e9620e37995],SizeBytes:57660216,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/autoscaling/cluster-autoscaler@sha256:07ab8c89cd0ad296ddb6347febe196d8fe0e1c757656a98f71199860d87cf1a5 registry.k8s.io/autoscaling/cluster-autoscaler:v1.22.0],SizeBytes:24220268,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c 
registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 30 12:22:33.887: INFO: Logging kubelet events for node ca-master Nov 30 12:22:33.931: INFO: Logging pods the kubelet thinks is on node ca-master Nov 30 12:22:33.983: INFO: kube-controller-manager-ca-master started at 2022-11-30 08:55:00 +0000 UTC (0+1 container statuses recorded) Nov 30 12:22:33.983: INFO: Container kube-controller-manager ready: true, restart count 1 Nov 30 12:22:33.983: INFO: kube-scheduler-ca-master started at 2022-11-30 08:55:00 +0000 UTC (0+1 container statuses recorded) Nov 30 12:22:33.983: INFO: Container kube-scheduler ready: true, restart count 0 Nov 30 12:22:33.983: INFO: etcd-server-events-ca-master started at 2022-11-30 08:55:00 +0000 UTC (0+1 container statuses recorded) Nov 30 12:22:33.983: INFO: Container etcd-container ready: true, restart count 0 Nov 30 12:22:33.983: INFO: etcd-server-ca-master started at 2022-11-30 08:55:00 +0000 UTC (0+1 container statuses recorded) Nov 30 12:22:33.983: INFO: Container etcd-container ready: true, restart count 0 Nov 30 12:22:33.983: INFO: cluster-autoscaler-ca-master started at 2022-11-30 08:55:18 +0000 UTC (0+1 container statuses recorded) Nov 30 12:22:33.983: INFO: Container cluster-autoscaler ready: true, restart count 2 Nov 30 12:22:33.983: INFO: l7-lb-controller-ca-master started at 2022-11-30 08:55:18 +0000 UTC (0+1 container statuses recorded) Nov 30 12:22:33.983: INFO: Container l7-lb-controller ready: true, restart count 2 Nov 30 12:22:33.983: INFO: kube-apiserver-ca-master started at 2022-11-30 08:55:00 +0000 UTC (0+1 container statuses recorded) Nov 30 12:22:33.983: INFO: Container kube-apiserver ready: true, restart count 0 Nov 30 12:22:33.983: INFO: kube-addon-manager-ca-master started at 2022-11-30 08:55:18 +0000 UTC (0+1 container statuses recorded) Nov 30 12:22:33.983: INFO: Container kube-addon-manager ready: true, restart count 0 Nov 30 12:22:33.983: INFO: metadata-proxy-v0.1-vp7mp started at 2022-11-30 08:56:16 +0000 UTC (0+2 container statuses recorded) Nov 30 12:22:33.983: INFO: Container metadata-proxy ready: true, restart count 0 Nov 30 12:22:33.983: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 30 12:22:33.983: INFO: konnectivity-server-ca-master started at 2022-11-30 08:55:00 +0000 UTC (0+1 container statuses recorded) Nov 30 12:22:33.983: INFO: Container konnectivity-server-container ready: true, restart count 0 Nov 30 12:22:34.170: INFO: Latency metrics for node ca-master Nov 30 12:22:34.170: INFO: Logging node info for node ca-minion-group-1-ng86 Nov 30 12:22:34.212: INFO: Node Info: &Node{ObjectMeta:{ca-minion-group-1-ng86 f24aab78-1d08-4bd6-a17d-7633ede5752e 41461 0 2022-11-30 12:06:32 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:ca-minion-group-1-ng86 kubernetes.io/os:linux 
node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-30 12:06:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.36.0/24\"":{}}}} } {kubelet Update v1 2022-11-30 12:06:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-30 12:06:44 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-30 12:17:17 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status} {node-problem-detector Update v1 2022-11-30 12:21:39 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {cluster-autoscaler Update v1 2022-11-30 12:21:56 +0000 UTC FieldsV1 {"f:spec":{"f:taints":{}}} }]},Spec:NodeSpec{PodCIDR:10.64.36.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-autoscaling-migs/us-west1-b/ca-minion-group-1-ng86,Unschedulable:false,Taints:[]Taint{Taint{Key:DeletionCandidateOfClusterAutoscaler,Value:1669810523,Effect:PreferNoSchedule,TimeAdded:<nil>,},Taint{Key:ToBeDeletedByClusterAutoscaler,Value:1669810916,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.36.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: 
{{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-30 12:21:39 +0000 UTC,LastTransitionTime:2022-11-30 12:06:36 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-30 12:21:39 +0000 UTC,LastTransitionTime:2022-11-30 12:06:36 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-30 12:21:39 +0000 UTC,LastTransitionTime:2022-11-30 12:06:36 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-30 12:21:39 +0000 UTC,LastTransitionTime:2022-11-30 12:06:36 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-30 12:21:39 +0000 UTC,LastTransitionTime:2022-11-30 12:06:36 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-30 12:21:39 +0000 UTC,LastTransitionTime:2022-11-30 12:06:36 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-30 12:21:39 +0000 UTC,LastTransitionTime:2022-11-30 12:06:36 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-30 12:06:44 +0000 UTC,LastTransitionTime:2022-11-30 12:06:44 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-30 12:17:17 +0000 UTC,LastTransitionTime:2022-11-30 12:06:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-30 12:17:17 +0000 UTC,LastTransitionTime:2022-11-30 12:06:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-30 12:17:17 +0000 UTC,LastTransitionTime:2022-11-30 12:06:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-30 12:17:17 +0000 UTC,LastTransitionTime:2022-11-30 12:06:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.38,},NodeAddress{Type:ExternalIP,Address:35.227.188.214,},NodeAddress{Type:InternalDNS,Address:ca-minion-group-1-ng86.c.k8s-jkns-gci-autoscaling-migs.internal,},NodeAddress{Type:Hostname,Address:ca-minion-group-1-ng86.c.k8s-jkns-gci-autoscaling-migs.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:76a3d95baa1a25ed5dd35eb8cdcd500b,SystemUUID:76a3d95b-aa1a-25ed-5dd3-5eb8cdcd500b,BootID:d6a11685-71f0-4130-b2e0-e89e6d0946c0,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.6.6,KubeletVersion:v1.27.0-alpha.0.53+d98e9620e37995,KubeProxyVersion:v1.27.0-alpha.0.53+d98e9620e37995,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.53_d98e9620e37995],SizeBytes:67201736,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 30 12:22:34.213: INFO: Logging kubelet events for node ca-minion-group-1-ng86 Nov 30 12:22:34.272: INFO: Logging pods the kubelet thinks is on node ca-minion-group-1-ng86 Nov 30 12:22:39.324: INFO: Unable to retrieve kubelet pods for node ca-minion-group-1-ng86: error trying to reach service: dial tcp 10.138.0.38:10250: i/o timeout Nov 30 12:22:39.324: INFO: Logging node info for node ca-minion-group-wp8h Nov 30 12:22:39.367: INFO: Node Info: &Node{ObjectMeta:{ca-minion-group-wp8h cabb7ed6-6a4e-4a14-a7cb-07ef65191e0f 41182 0 2022-11-30 09:09:55 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:ca-minion-group-wp8h kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-30 09:09:55 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-30 09:10:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.7.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-30 09:10:06 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-30 12:17:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status} {node-problem-detector Update v1 2022-11-30 12:20:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.7.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-autoscaling-migs/us-west1-b/ca-minion-group-wp8h,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.7.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} 
BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-30 12:20:21 +0000 UTC,LastTransitionTime:2022-11-30 09:09:59 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-30 12:20:21 +0000 UTC,LastTransitionTime:2022-11-30 09:09:59 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-30 12:20:21 +0000 UTC,LastTransitionTime:2022-11-30 09:09:59 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-30 12:20:21 +0000 UTC,LastTransitionTime:2022-11-30 09:09:59 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-30 12:20:21 +0000 UTC,LastTransitionTime:2022-11-30 09:09:59 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-30 12:20:21 +0000 UTC,LastTransitionTime:2022-11-30 09:09:59 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-30 12:20:21 +0000 UTC,LastTransitionTime:2022-11-30 09:09:59 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-30 09:10:06 +0000 UTC,LastTransitionTime:2022-11-30 09:10:06 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-30 12:17:41 +0000 UTC,LastTransitionTime:2022-11-30 09:09:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-30 12:17:41 +0000 UTC,LastTransitionTime:2022-11-30 09:09:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-30 12:17:41 +0000 UTC,LastTransitionTime:2022-11-30 09:09:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-30 12:17:41 +0000 UTC,LastTransitionTime:2022-11-30 09:09:56 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.8,},NodeAddress{Type:ExternalIP,Address:34.168.80.138,},NodeAddress{Type:InternalDNS,Address:ca-minion-group-wp8h.c.k8s-jkns-gci-autoscaling-migs.internal,},NodeAddress{Type:Hostname,Address:ca-minion-group-wp8h.c.k8s-jkns-gci-autoscaling-migs.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b506b63fb01c6040e71588bca8be6fdd,SystemUUID:b506b63f-b01c-6040-e715-88bca8be6fdd,BootID:bd2be204-29ef-43fe-9f42-c8f31fa19831,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.6.6,KubeletVersion:v1.27.0-alpha.0.53+d98e9620e37995,KubeProxyVersion:v1.27.0-alpha.0.53+d98e9620e37995,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.53_d98e9620e37995],SizeBytes:67201736,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 30 12:22:39.368: INFO: Logging kubelet events for node ca-minion-group-wp8h Nov 30 12:22:39.413: INFO: Logging pods the kubelet thinks is on node ca-minion-group-wp8h Nov 30 12:22:39.465: INFO: l7-default-backend-8549d69d99-sn7gr started at 2022-11-30 09:38:58 +0000 UTC (0+1 container statuses recorded) Nov 30 12:22:39.465: INFO: Container default-http-backend ready: true, restart count 0 Nov 30 12:22:39.465: INFO: metadata-proxy-v0.1-kx6wg started at 2022-11-30 09:09:56 +0000 UTC (0+2 container statuses recorded) Nov 30 12:22:39.465: INFO: Container metadata-proxy ready: true, restart count 0 Nov 30 12:22:39.465: INFO: Container prometheus-to-sd-exporter 
ready: true, restart count 0 Nov 30 12:22:39.465: INFO: konnectivity-agent-hh8bs started at 2022-11-30 09:10:06 +0000 UTC (0+1 container statuses recorded) Nov 30 12:22:39.465: INFO: Container konnectivity-agent ready: true, restart count 0 Nov 30 12:22:39.465: INFO: kube-proxy-ca-minion-group-wp8h started at 2022-11-30 09:09:56 +0000 UTC (0+1 container statuses recorded) Nov 30 12:22:39.465: INFO: Container kube-proxy ready: true, restart count 0 Nov 30 12:22:39.465: INFO: volume-snapshot-controller-0 started at 2022-11-30 10:04:18 +0000 UTC (0+1 container statuses recorded) Nov 30 12:22:39.465: INFO: Container volume-snapshot-controller ready: true, restart count 0 Nov 30 12:22:39.465: INFO: coredns-6d97d5ddb-fwzcx started at 2022-11-30 09:38:58 +0000 UTC (0+1 container statuses recorded) Nov 30 12:22:39.465: INFO: Container coredns ready: true, restart count 0 Nov 30 12:22:39.465: INFO: metrics-server-v0.5.2-867b8754b9-4qcz5 started at 2022-11-30 09:33:57 +0000 UTC (0+2 container statuses recorded) Nov 30 12:22:39.465: INFO: Container metrics-server ready: true, restart count 0 Nov 30 12:22:39.465: INFO: Container metrics-server-nanny ready: true, restart count 0 Nov 30 12:22:39.647: INFO: Latency metrics for node ca-minion-group-wp8h [DeferCleanup (Each)] [sig-autoscaling] Cluster size autoscaling [Slow] tear down framework | framework.go:193 STEP: Destroying namespace "autoscaling-2571" for this suite. 11/30/22 12:22:39.647
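The "Expected <int>: 1 to equal <int>: 2" failure above comes from the BeforeEach sanity check at cluster_size_autoscaling.go:127: two nodes reported Ready, but ca-minion-group-1-ng86 still carried DeletionCandidateOfClusterAutoscaler (PreferNoSchedule) and ToBeDeletedByClusterAutoscaler (NoSchedule) taints from an in-flight scale-down, as its node dump above shows, so only one node counted as schedulable. A rough sketch of such a schedulability filter follows; it is an approximation, not the framework's exact code, which reasons in terms of taints a test pod could tolerate.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isSchedulable approximates the e2e framework's notion of a schedulable
// node: Ready, not marked Unschedulable, and free of NoSchedule/NoExecute
// taints.
func isSchedulable(node *corev1.Node) bool {
	if node.Spec.Unschedulable {
		return false
	}
	for _, t := range node.Spec.Taints {
		if t.Effect == corev1.TaintEffectNoSchedule || t.Effect == corev1.TaintEffectNoExecute {
			return false
		}
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// A node tainted the way ca-minion-group-1-ng86 was in the dump above.
	n := &corev1.Node{}
	n.Status.Conditions = []corev1.NodeCondition{{Type: corev1.NodeReady, Status: corev1.ConditionTrue}}
	n.Spec.Taints = []corev1.Taint{{Key: "ToBeDeletedByClusterAutoscaler", Effect: corev1.TaintEffectNoSchedule}}
	fmt.Println(isSchedulable(n)) // false: Ready, but NoSchedule-tainted
}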
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-autoscaling\]\sCluster\ssize\sautoscaling\s\[Slow\]\sshould\sbe\sable\sto\sscale\sdown\sby\sdraining\ssystem\spods\swith\spdb\[Feature\:ClusterSizeAutoscalingScaleDown\]$'
test/e2e/autoscaling/cluster_size_autoscaling.go:748
k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27.1(0x6)
	test/e2e/autoscaling/cluster_size_autoscaling.go:748 +0x94
k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc00119cc30, 0x7fa3ee0?, {0x75ce977, 0xb}, 0x2, 0x1, 0xc002041f58)
	test/e2e/autoscaling/cluster_size_autoscaling.go:1061 +0x842
k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27()
	test/e2e/autoscaling/cluster_size_autoscaling.go:746 +0x57
from junit_01.xml
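In the log that follows, the test scales both node groups up, spreads a "reschedulable-pods" replication controller across the nodes, creates a PodDisruptionBudget over it, and then waits for the autoscaler to drain a node; that wait never completes. Below is a sketch of what the "Create a PodDisruptionBudget" step amounts to, where the selector label (name: reschedulable-pods) and the minAvailable value are assumptions, not the test's actual values.

package main

import (
	"context"

	policyv1 "k8s.io/api/policy/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// createPDB installs a PodDisruptionBudget over the reschedulable-pods
// replication controller in kube-system, so the autoscaler must evict its
// pods one at a time while draining a node.
func createPDB(ctx context.Context, cs kubernetes.Interface) error {
	minAvailable := intstr.FromInt(4) // assumed: replicas-1, allowing one eviction at a time
	pdb := &policyv1.PodDisruptionBudget{
		ObjectMeta: metav1.ObjectMeta{Name: "test-pdb", Namespace: "kube-system"},
		Spec: policyv1.PodDisruptionBudgetSpec{
			Selector:     &metav1.LabelSelector{MatchLabels: map[string]string{"name": "reschedulable-pods"}},
			MinAvailable: &minAvailable,
		},
	}
	_, err := cs.PolicyV1().PodDisruptionBudgets("kube-system").Create(ctx, pdb, metav1.CreateOptions{})
	return err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := createPDB(context.Background(), cs); err != nil {
		panic(err)
	}
}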
[BeforeEach] [sig-autoscaling] Cluster size autoscaling [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/30/22 09:06:40.493 Nov 30 09:06:40.494: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename autoscaling 11/30/22 09:06:40.495 STEP: Waiting for a default service account to be provisioned in namespace 11/30/22 09:06:40.633 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/30/22 09:06:40.714 [BeforeEach] [sig-autoscaling] Cluster size autoscaling [Slow] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-autoscaling] Cluster size autoscaling [Slow] test/e2e/autoscaling/cluster_size_autoscaling.go:103 STEP: Initial size of ca-minion-group-1: 1 11/30/22 09:06:44.726 STEP: Initial size of ca-minion-group: 1 11/30/22 09:06:48.23 Nov 30 09:06:48.275: INFO: Condition Ready of node ca-minion-group-09pl is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Nov 30 09:06:48.275: INFO: Waiting for ready nodes 2, current ready 2, not ready nodes 1 Nov 30 09:07:08.320: INFO: Condition Ready of node ca-minion-group-09pl is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669798561 PreferNoSchedule <nil>} {ToBeDeletedByClusterAutoscaler 1669799166 NoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-11-30 09:06:46 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 09:06:51 +0000 UTC}]. Failure Nov 30 09:07:08.320: INFO: Waiting for ready nodes 2, current ready 2, not ready nodes 1 Nov 30 09:07:28.365: INFO: Condition Ready of node ca-minion-group-09pl is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669798561 PreferNoSchedule <nil>} {ToBeDeletedByClusterAutoscaler 1669799166 NoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-11-30 09:06:46 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 09:06:51 +0000 UTC}]. Failure Nov 30 09:07:28.365: INFO: Waiting for ready nodes 2, current ready 2, not ready nodes 1 Nov 30 09:07:48.411: INFO: Condition Ready of node ca-minion-group-09pl is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669798561 PreferNoSchedule <nil>} {ToBeDeletedByClusterAutoscaler 1669799166 NoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-11-30 09:06:46 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 09:06:51 +0000 UTC}]. Failure Nov 30 09:07:48.411: INFO: Waiting for ready nodes 2, current ready 2, not ready nodes 1 Nov 30 09:08:08.459: INFO: Condition Ready of node ca-minion-group-09pl is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669798561 PreferNoSchedule <nil>} {ToBeDeletedByClusterAutoscaler 1669799166 NoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-11-30 09:06:46 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 09:06:51 +0000 UTC}]. 
Failure Nov 30 09:08:08.459: INFO: Waiting for ready nodes 2, current ready 2, not ready nodes 1 Nov 30 09:08:28.502: INFO: Cluster has reached the desired number of ready nodes 2 STEP: Initial number of schedulable nodes: 2 11/30/22 09:08:28.545 [It] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown] test/e2e/autoscaling/cluster_size_autoscaling.go:745 STEP: Manually increase cluster size 11/30/22 09:08:28.546 STEP: Setting size of ca-minion-group-1 to 3 11/30/22 09:08:32.026 Nov 30 09:08:32.027: INFO: Skipping dumping logs from cluster Nov 30 09:08:36.493: INFO: Skipping dumping logs from cluster STEP: Setting size of ca-minion-group to 3 11/30/22 09:08:39.887 Nov 30 09:08:39.887: INFO: Skipping dumping logs from cluster Nov 30 09:08:44.322: INFO: Skipping dumping logs from cluster STEP: Setting size of ca-minion-group to 3 11/30/22 09:08:47.762 Nov 30 09:08:47.763: INFO: Skipping dumping logs from cluster Nov 30 09:08:52.350: INFO: Skipping dumping logs from cluster W1130 09:08:55.926416 8016 cluster_size_autoscaling.go:1758] Unexpected node group size while waiting for cluster resize. Setting size to target again. I1130 09:08:55.926484 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 2, not ready nodes 0 I1130 09:09:22.851843 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 2, not ready nodes 0 I1130 09:09:49.680569 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 2, not ready nodes 0 I1130 09:10:09.729001 8016 cluster_size_autoscaling.go:1381] Cluster has reached the desired size STEP: Run a pod on each node 11/30/22 09:10:09.772 STEP: Taint node ca-minion-group-1-85zq 11/30/22 09:10:09.772 STEP: Taint node ca-minion-group-1-bjq8 11/30/22 09:10:09.866 STEP: Taint node ca-minion-group-1-f12z 11/30/22 09:10:09.959 STEP: Taint node ca-minion-group-5khq 11/30/22 09:10:10.051 STEP: Taint node ca-minion-group-8gtp 11/30/22 09:10:10.145 STEP: Taint node ca-minion-group-wp8h 11/30/22 09:10:10.241 STEP: creating replication controller reschedulable-pods in namespace kube-system 11/30/22 09:10:10.335 I1130 09:10:10.380664 8016 runners.go:193] Created replication controller with name: reschedulable-pods, namespace: kube-system, replica count: 0 STEP: Remove taint from node ca-minion-group-1-85zq 11/30/22 09:10:10.473 STEP: Taint node ca-minion-group-1-85zq 11/30/22 09:10:15.71 STEP: Remove taint from node ca-minion-group-1-bjq8 11/30/22 09:10:15.802 STEP: Taint node ca-minion-group-1-bjq8 11/30/22 09:10:21.072 STEP: Remove taint from node ca-minion-group-1-f12z 11/30/22 09:10:21.168 STEP: Taint node ca-minion-group-1-f12z 11/30/22 09:10:26.445 STEP: Remove taint from node ca-minion-group-5khq 11/30/22 09:10:26.538 STEP: Taint node ca-minion-group-5khq 11/30/22 09:10:31.775 STEP: Remove taint from node ca-minion-group-8gtp 11/30/22 09:10:31.867 STEP: Taint node ca-minion-group-8gtp 11/30/22 09:10:37.114 STEP: Remove taint from node ca-minion-group-wp8h 11/30/22 09:10:37.294 STEP: Taint node ca-minion-group-wp8h 11/30/22 09:10:42.534 STEP: Remove taint from node ca-minion-group-wp8h 11/30/22 09:10:42.635 STEP: Remove taint from node ca-minion-group-8gtp 11/30/22 09:10:42.739 STEP: Remove taint from node ca-minion-group-5khq 11/30/22 09:10:42.833 STEP: Remove taint from node ca-minion-group-1-f12z 11/30/22 09:10:42.925 STEP: Remove taint from node ca-minion-group-1-bjq8 11/30/22 09:10:43.018 STEP: Remove taint from node ca-minion-group-1-85zq 
11/30/22 09:10:43.116 STEP: Create a PodDisruptionBudget 11/30/22 09:10:43.21 STEP: Some node should be removed 11/30/22 09:10:43.255 I1130 09:10:43.301338 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 I1130 09:11:03.348543 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 I1130 09:11:23.394282 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 I1130 09:11:43.440185 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 I1130 09:12:03.486924 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 I1130 09:12:23.534204 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 I1130 09:12:43.582002 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 I1130 09:13:03.628854 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 I1130 09:13:23.675906 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 6m48.054s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 In [It] (Node Runtime: 5m0.001s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 At [By Step] Some node should be removed (Step Runtime: 2m45.292s) test/e2e/autoscaling/cluster_size_autoscaling.go:747 Spec Goroutine goroutine 1113 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002e6eb60}, 0xc001685ba0, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27.1(0x6) test/e2e/autoscaling/cluster_size_autoscaling.go:748 > k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc00119cc30, 0x7fa3ee0?, {0x75ce977, 0xb}, 0x2, 0x1, 0xc002041f58) test/e2e/autoscaling/cluster_size_autoscaling.go:1061 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27() test/e2e/autoscaling/cluster_size_autoscaling.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc00091ea80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1130 09:13:43.722158 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 7m8.055s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 In [It] (Node Runtime: 5m20.002s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 At [By Step] Some node should be removed (Step Runtime: 3m5.293s) test/e2e/autoscaling/cluster_size_autoscaling.go:747 Spec Goroutine goroutine 1113 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002e6eb60}, 0xc001685ba0, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27.1(0x6) test/e2e/autoscaling/cluster_size_autoscaling.go:748 > k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc00119cc30, 0x7fa3ee0?, {0x75ce977, 0xb}, 0x2, 0x1, 0xc002041f58) test/e2e/autoscaling/cluster_size_autoscaling.go:1061 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27() test/e2e/autoscaling/cluster_size_autoscaling.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc00091ea80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1130 09:14:03.769217 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 7m28.056s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 In [It] (Node Runtime: 5m40.004s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 At [By Step] Some node should be removed (Step Runtime: 3m25.294s) test/e2e/autoscaling/cluster_size_autoscaling.go:747 Spec Goroutine goroutine 1113 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002e6eb60}, 0xc001685ba0, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27.1(0x6) test/e2e/autoscaling/cluster_size_autoscaling.go:748 > k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc00119cc30, 0x7fa3ee0?, {0x75ce977, 0xb}, 0x2, 0x1, 0xc002041f58) test/e2e/autoscaling/cluster_size_autoscaling.go:1061 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27() test/e2e/autoscaling/cluster_size_autoscaling.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc00091ea80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1130 09:14:23.815355 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 7m48.057s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 In [It] (Node Runtime: 6m0.005s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 At [By Step] Some node should be removed (Step Runtime: 3m45.295s) test/e2e/autoscaling/cluster_size_autoscaling.go:747 Spec Goroutine goroutine 1113 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002e6eb60}, 0xc001685ba0, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27.1(0x6) test/e2e/autoscaling/cluster_size_autoscaling.go:748 > k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc00119cc30, 0x7fa3ee0?, {0x75ce977, 0xb}, 0x2, 0x1, 0xc002041f58) test/e2e/autoscaling/cluster_size_autoscaling.go:1061 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27() test/e2e/autoscaling/cluster_size_autoscaling.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc00091ea80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1130 09:14:43.863142 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 8m8.059s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 In [It] (Node Runtime: 6m20.006s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 At [By Step] Some node should be removed (Step Runtime: 4m5.297s) test/e2e/autoscaling/cluster_size_autoscaling.go:747 Spec Goroutine goroutine 1113 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002e6eb60}, 0xc001685ba0, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27.1(0x6) test/e2e/autoscaling/cluster_size_autoscaling.go:748 > k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc00119cc30, 0x7fa3ee0?, {0x75ce977, 0xb}, 0x2, 0x1, 0xc002041f58) test/e2e/autoscaling/cluster_size_autoscaling.go:1061 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27() test/e2e/autoscaling/cluster_size_autoscaling.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc00091ea80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1130 09:15:03.910356 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 8m28.061s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 In [It] (Node Runtime: 6m40.009s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 At [By Step] Some node should be removed (Step Runtime: 4m25.299s) test/e2e/autoscaling/cluster_size_autoscaling.go:747 Spec Goroutine goroutine 1113 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002e6eb60}, 0xc001685ba0, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27.1(0x6) test/e2e/autoscaling/cluster_size_autoscaling.go:748 > k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc00119cc30, 0x7fa3ee0?, {0x75ce977, 0xb}, 0x2, 0x1, 0xc002041f58) test/e2e/autoscaling/cluster_size_autoscaling.go:1061 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27() test/e2e/autoscaling/cluster_size_autoscaling.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc00091ea80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1130 09:15:23.956590 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 8m48.062s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 In [It] (Node Runtime: 7m0.009s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 At [By Step] Some node should be removed (Step Runtime: 4m45.3s) test/e2e/autoscaling/cluster_size_autoscaling.go:747 Spec Goroutine goroutine 1113 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002e6eb60}, 0xc001685ba0, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
[... 32 further progress snapshots elided: the same 20 s poll cycle repeated from 09:15:23 to 09:25:45 (Spec Runtime 8m48.062s through 19m8.124s), each poll reporting "Waiting for cluster with func, current size 6, not ready nodes 0" and each snapshot showing the identical goroutine 1113 sleep stack ...]
------------------------------
I1130 09:26:05.483929 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0
------------------------------
Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 19m28.126s)
  test/e2e/autoscaling/cluster_size_autoscaling.go:745
  In [It] (Node Runtime: 17m40.074s)
    test/e2e/autoscaling/cluster_size_autoscaling.go:745
    At [By Step] Some node should be removed (Step Runtime: 15m25.364s)
      test/e2e/autoscaling/cluster_size_autoscaling.go:747

  Spec Goroutine
  goroutine 1113 [sleep]
    time.Sleep(0x4a817c800)
      /usr/local/go/src/runtime/time.go:195
  > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002e6eb60}, 0xc001685ba0, 0x1176592e000, 0x0)
      test/e2e/autoscaling/cluster_size_autoscaling.go:1364
  > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...)
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27.1(0x6) test/e2e/autoscaling/cluster_size_autoscaling.go:748 > k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc00119cc30, 0x7fa3ee0?, {0x75ce977, 0xb}, 0x2, 0x1, 0xc002041f58) test/e2e/autoscaling/cluster_size_autoscaling.go:1061 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27() test/e2e/autoscaling/cluster_size_autoscaling.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc00091ea80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1130 09:26:25.531636 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 19m48.128s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 In [It] (Node Runtime: 18m0.075s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 At [By Step] Some node should be removed (Step Runtime: 15m45.366s) test/e2e/autoscaling/cluster_size_autoscaling.go:747 Spec Goroutine goroutine 1113 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002e6eb60}, 0xc001685ba0, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27.1(0x6) test/e2e/autoscaling/cluster_size_autoscaling.go:748 > k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc00119cc30, 0x7fa3ee0?, {0x75ce977, 0xb}, 0x2, 0x1, 0xc002041f58) test/e2e/autoscaling/cluster_size_autoscaling.go:1061 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27() test/e2e/autoscaling/cluster_size_autoscaling.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc00091ea80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1130 09:26:45.577810 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 20m8.129s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 In [It] (Node Runtime: 18m20.077s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 At [By Step] Some node should be removed (Step Runtime: 16m5.367s) test/e2e/autoscaling/cluster_size_autoscaling.go:747 Spec Goroutine goroutine 1113 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002e6eb60}, 0xc001685ba0, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27.1(0x6) test/e2e/autoscaling/cluster_size_autoscaling.go:748 > k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc00119cc30, 0x7fa3ee0?, {0x75ce977, 0xb}, 0x2, 0x1, 0xc002041f58) test/e2e/autoscaling/cluster_size_autoscaling.go:1061 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27() test/e2e/autoscaling/cluster_size_autoscaling.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc00091ea80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1130 09:27:05.623608 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 20m28.13s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 In [It] (Node Runtime: 18m40.078s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 At [By Step] Some node should be removed (Step Runtime: 16m25.368s) test/e2e/autoscaling/cluster_size_autoscaling.go:747 Spec Goroutine goroutine 1113 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002e6eb60}, 0xc001685ba0, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27.1(0x6) test/e2e/autoscaling/cluster_size_autoscaling.go:748 > k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc00119cc30, 0x7fa3ee0?, {0x75ce977, 0xb}, 0x2, 0x1, 0xc002041f58) test/e2e/autoscaling/cluster_size_autoscaling.go:1061 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27() test/e2e/autoscaling/cluster_size_autoscaling.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc00091ea80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1130 09:27:25.671746 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 20m48.132s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 In [It] (Node Runtime: 19m0.08s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 At [By Step] Some node should be removed (Step Runtime: 16m45.37s) test/e2e/autoscaling/cluster_size_autoscaling.go:747 Spec Goroutine goroutine 1113 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002e6eb60}, 0xc001685ba0, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27.1(0x6) test/e2e/autoscaling/cluster_size_autoscaling.go:748 > k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc00119cc30, 0x7fa3ee0?, {0x75ce977, 0xb}, 0x2, 0x1, 0xc002041f58) test/e2e/autoscaling/cluster_size_autoscaling.go:1061 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27() test/e2e/autoscaling/cluster_size_autoscaling.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc00091ea80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1130 09:27:45.719513 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 21m8.134s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 In [It] (Node Runtime: 19m20.082s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 At [By Step] Some node should be removed (Step Runtime: 17m5.372s) test/e2e/autoscaling/cluster_size_autoscaling.go:747 Spec Goroutine goroutine 1113 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002e6eb60}, 0xc001685ba0, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27.1(0x6) test/e2e/autoscaling/cluster_size_autoscaling.go:748 > k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc00119cc30, 0x7fa3ee0?, {0x75ce977, 0xb}, 0x2, 0x1, 0xc002041f58) test/e2e/autoscaling/cluster_size_autoscaling.go:1061 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27() test/e2e/autoscaling/cluster_size_autoscaling.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc00091ea80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1130 09:28:05.767176 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 21m28.136s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 In [It] (Node Runtime: 19m40.083s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 At [By Step] Some node should be removed (Step Runtime: 17m25.374s) test/e2e/autoscaling/cluster_size_autoscaling.go:747 Spec Goroutine goroutine 1113 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002e6eb60}, 0xc001685ba0, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27.1(0x6) test/e2e/autoscaling/cluster_size_autoscaling.go:748 > k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc00119cc30, 0x7fa3ee0?, {0x75ce977, 0xb}, 0x2, 0x1, 0xc002041f58) test/e2e/autoscaling/cluster_size_autoscaling.go:1061 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27() test/e2e/autoscaling/cluster_size_autoscaling.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc00091ea80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1130 09:28:25.813527 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 21m48.138s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 In [It] (Node Runtime: 20m0.086s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 At [By Step] Some node should be removed (Step Runtime: 17m45.376s) test/e2e/autoscaling/cluster_size_autoscaling.go:747 Spec Goroutine goroutine 1113 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002e6eb60}, 0xc001685ba0, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27.1(0x6) test/e2e/autoscaling/cluster_size_autoscaling.go:748 > k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc00119cc30, 0x7fa3ee0?, {0x75ce977, 0xb}, 0x2, 0x1, 0xc002041f58) test/e2e/autoscaling/cluster_size_autoscaling.go:1061 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27() test/e2e/autoscaling/cluster_size_autoscaling.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc00091ea80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1130 09:28:45.861310 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 22m8.139s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 In [It] (Node Runtime: 20m20.087s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 At [By Step] Some node should be removed (Step Runtime: 18m5.377s) test/e2e/autoscaling/cluster_size_autoscaling.go:747 Spec Goroutine goroutine 1113 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002e6eb60}, 0xc001685ba0, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27.1(0x6) test/e2e/autoscaling/cluster_size_autoscaling.go:748 > k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc00119cc30, 0x7fa3ee0?, {0x75ce977, 0xb}, 0x2, 0x1, 0xc002041f58) test/e2e/autoscaling/cluster_size_autoscaling.go:1061 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27() test/e2e/autoscaling/cluster_size_autoscaling.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc00091ea80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1130 09:29:05.910653 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 22m28.141s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 In [It] (Node Runtime: 20m40.088s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 At [By Step] Some node should be removed (Step Runtime: 18m25.379s) test/e2e/autoscaling/cluster_size_autoscaling.go:747 Spec Goroutine goroutine 1113 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002e6eb60}, 0xc001685ba0, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27.1(0x6) test/e2e/autoscaling/cluster_size_autoscaling.go:748 > k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc00119cc30, 0x7fa3ee0?, {0x75ce977, 0xb}, 0x2, 0x1, 0xc002041f58) test/e2e/autoscaling/cluster_size_autoscaling.go:1061 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27() test/e2e/autoscaling/cluster_size_autoscaling.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc00091ea80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1130 09:29:25.957283 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 22m48.142s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 In [It] (Node Runtime: 21m0.089s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 At [By Step] Some node should be removed (Step Runtime: 18m45.38s) test/e2e/autoscaling/cluster_size_autoscaling.go:747 Spec Goroutine goroutine 1113 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002e6eb60}, 0xc001685ba0, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27.1(0x6) test/e2e/autoscaling/cluster_size_autoscaling.go:748 > k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc00119cc30, 0x7fa3ee0?, {0x75ce977, 0xb}, 0x2, 0x1, 0xc002041f58) test/e2e/autoscaling/cluster_size_autoscaling.go:1061 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27() test/e2e/autoscaling/cluster_size_autoscaling.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc00091ea80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1130 09:29:46.005357 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 23m8.144s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 In [It] (Node Runtime: 21m20.091s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 At [By Step] Some node should be removed (Step Runtime: 19m5.382s) test/e2e/autoscaling/cluster_size_autoscaling.go:747 Spec Goroutine goroutine 1113 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002e6eb60}, 0xc001685ba0, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27.1(0x6) test/e2e/autoscaling/cluster_size_autoscaling.go:748 > k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc00119cc30, 0x7fa3ee0?, {0x75ce977, 0xb}, 0x2, 0x1, 0xc002041f58) test/e2e/autoscaling/cluster_size_autoscaling.go:1061 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27() test/e2e/autoscaling/cluster_size_autoscaling.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc00091ea80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1130 09:30:06.050948 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 23m28.146s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 In [It] (Node Runtime: 21m40.093s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 At [By Step] Some node should be removed (Step Runtime: 19m25.384s) test/e2e/autoscaling/cluster_size_autoscaling.go:747 Spec Goroutine goroutine 1113 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002e6eb60}, 0xc001685ba0, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27.1(0x6) test/e2e/autoscaling/cluster_size_autoscaling.go:748 > k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc00119cc30, 0x7fa3ee0?, {0x75ce977, 0xb}, 0x2, 0x1, 0xc002041f58) test/e2e/autoscaling/cluster_size_autoscaling.go:1061 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27() test/e2e/autoscaling/cluster_size_autoscaling.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc00091ea80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1130 09:30:26.101254 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 23m48.148s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 In [It] (Node Runtime: 22m0.095s) test/e2e/autoscaling/cluster_size_autoscaling.go:745 At [By Step] Some node should be removed (Step Runtime: 19m45.386s) test/e2e/autoscaling/cluster_size_autoscaling.go:747 Spec Goroutine goroutine 1113 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc002e6eb60}, 0xc001685ba0, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
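For orientation, the goroutine dump pins the hang to a poll-until-timeout loop: goroutine 1113 sleeps 20s per iteration (time.Sleep(0x4a817c800), i.e. 20,000,000,000 ns) inside WaitForClusterSizeFuncWithUnready, whose 0x1176592e000 argument is the 20-minute budget (1,200,000,000,000 ns). Below is a minimal sketch of that pattern; waitForClusterSize, getSize, and sizeFunc are illustrative stand-ins, not the actual helpers in test/e2e/autoscaling/cluster_size_autoscaling.go.

```go
// Minimal sketch (illustrative names, not the real e2e helper) of the
// poll-until-timeout loop visible in the goroutine dump above.
package main

import (
	"fmt"
	"log"
	"time"
)

const (
	pollInterval = 20 * time.Second // matches time.Sleep(0x4a817c800) in the trace
	pollTimeout  = 20 * time.Minute // matches the 0x1176592e000 ns budget in the trace
)

// waitForClusterSize polls getSize until sizeFunc accepts the reported node
// count, logging each attempt the way the test log does; once the budget is
// spent it returns the same "timeout waiting ..." error the spec failed with.
func waitForClusterSize(getSize func() (int, error), sizeFunc func(int) bool) error {
	for start := time.Now(); time.Since(start) < pollTimeout; time.Sleep(pollInterval) {
		size, err := getSize()
		if err != nil {
			return err
		}
		log.Printf("Waiting for cluster with func, current size %d", size)
		if sizeFunc(size) {
			return nil // predicate satisfied; cluster reached an acceptable size
		}
	}
	return fmt.Errorf("timeout waiting %v for appropriate cluster size", pollTimeout)
}

func main() {
	// Toy usage: a predicate that accepts the current size returns on the first
	// poll; one that never does (as in the failure above, where the cluster sat
	// at 6 nodes) would block for the full 20m and return the timeout error.
	err := waitForClusterSize(
		func() (int, error) { return 6, nil },
		func(size int) bool { return size == 6 },
	)
	fmt.Println("result:", err)
}
```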
------------------------------
Nov 30 09:30:46.106: INFO: Unexpected error:
    <*errors.errorString | 0xc000ed6820>: {
        s: "timeout waiting 20m0s for appropriate cluster size",
    }
Nov 30 09:30:46.106: FAIL: timeout waiting 20m0s for appropriate cluster size

Full Stack Trace
k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27.1(0x6)
	test/e2e/autoscaling/cluster_size_autoscaling.go:748 +0x94
k8s.io/kubernetes/test/e2e/autoscaling.runDrainTest(0xc00119cc30, 0x7fa3ee0?, {0x75ce977, 0xb}, 0x2, 0x1, 0xc002041f58)
	test/e2e/autoscaling/cluster_size_autoscaling.go:1061 +0x842
k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.27()
	test/e2e/autoscaling/cluster_size_autoscaling.go:746 +0x57
STEP: deleting ReplicationController reschedulable-pods in namespace kube-system, will wait for the garbage collector to delete the pods 11/30/22 09:30:46.152
Nov 30 09:30:46.292: INFO: Deleting ReplicationController reschedulable-pods took: 45.757672ms
Nov 30 09:30:46.493: INFO: Terminating ReplicationController reschedulable-pods pods took: 201.199299ms
[AfterEach] [sig-autoscaling] Cluster size autoscaling [Slow]
  test/e2e/framework/node/init/init.go:32
Nov 30 09:30:47.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-autoscaling] Cluster size autoscaling [Slow]
  test/e2e/autoscaling/cluster_size_autoscaling.go:139
STEP: Restoring initial size of the cluster 11/30/22 09:30:47.741
STEP: Setting size of ca-minion-group-1 to 1 11/30/22 09:30:52.448
Nov 30 09:30:52.448: INFO: Skipping dumping logs from cluster
Nov 30 09:30:57.097: INFO: Skipping dumping logs from cluster
STEP: Setting size of ca-minion-group to 1 11/30/22 09:31:00.758
Nov 30 09:31:00.758: INFO: Skipping dumping logs from cluster
Nov 30 09:31:05.440: INFO: Skipping dumping logs from cluster
Nov 30 09:31:05.486: INFO: Waiting for ready nodes 2, current ready 6, not ready nodes 0
Nov 30 09:31:25.535: INFO: Waiting for ready nodes 2, current ready 6, not ready nodes 0
Nov 30 09:31:45.583: INFO: Condition Ready of node ca-minion-group-1-85zq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-30 09:31:32 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 09:31:37 +0000 UTC}]. Failure
Nov 30 09:31:45.583: INFO: Condition Ready of node ca-minion-group-1-bjq8 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Nov 30 09:31:45.583: INFO: Condition Ready of node ca-minion-group-5khq is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
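The restore loop around this point decides readiness from each Node's Ready condition and separately calls out the node.kubernetes.io/unreachable taints that NodeController adds once the kubelet stops posting status. A sketch of those two checks against the k8s.io/api/core/v1 types; isNodeReady and unreachableTaints are illustrative names, not the framework's actual helpers.

```go
// Sketch (illustrative names, not the framework's code) of how the
// "Condition Ready of node ... is false, but Node is tainted by
// NodeController" records around this point can be derived from a Node.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isNodeReady reports whether the node's Ready condition is True.
func isNodeReady(node *corev1.Node) bool {
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false // no Ready condition posted yet
}

// unreachableTaints returns the node.kubernetes.io/unreachable taints
// (NoSchedule/NoExecute) that NodeController adds to an unresponsive node.
func unreachableTaints(node *corev1.Node) []corev1.Taint {
	var taints []corev1.Taint
	for _, t := range node.Spec.Taints {
		if t.Key == corev1.TaintNodeUnreachable {
			taints = append(taints, t)
		}
	}
	return taints
}

func main() {
	var node corev1.Node // in the e2e test this would come from listing nodes via client-go
	fmt.Println(isNodeReady(&node), unreachableTaints(&node))
}
```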
Nov 30 09:31:45.583: INFO: Waiting for ready nodes 2, current ready 3, not ready nodes 3
Nov 30 09:32:05.630: INFO: Condition Ready of node ca-minion-group-1-85zq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-30 09:31:32 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 09:31:37 +0000 UTC}]. Failure
Nov 30 09:32:05.630: INFO: Condition Ready of node ca-minion-group-1-bjq8 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Nov 30 09:32:05.630: INFO: Condition Ready of node ca-minion-group-5khq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-30 09:31:42 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 09:31:47 +0000 UTC}]. Failure
Nov 30 09:32:05.630: INFO: Condition Ready of node ca-minion-group-8gtp is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Nov 30 09:32:05.630: INFO: Waiting for ready nodes 2, current ready 2, not ready nodes 4
Nov 30 09:32:25.679: INFO: Condition Ready of node ca-minion-group-1-85zq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-30 09:31:32 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 09:31:37 +0000 UTC}]. Failure
Nov 30 09:32:25.679: INFO: Condition Ready of node ca-minion-group-1-bjq8 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Nov 30 09:32:25.679: INFO: Condition Ready of node ca-minion-group-5khq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-30 09:31:42 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 09:31:47 +0000 UTC}]. Failure
Nov 30 09:32:25.679: INFO: Condition Ready of node ca-minion-group-8gtp is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Nov 30 09:32:25.679: INFO: Waiting for ready nodes 2, current ready 2, not ready nodes 4
Nov 30 09:32:45.727: INFO: Condition Ready of node ca-minion-group-1-85zq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-30 09:31:32 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 09:31:37 +0000 UTC}]. Failure
Nov 30 09:32:45.727: INFO: Condition Ready of node ca-minion-group-1-bjq8 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Nov 30 09:32:45.727: INFO: Condition Ready of node ca-minion-group-5khq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-30 09:31:42 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 09:31:47 +0000 UTC}]. Failure
Nov 30 09:32:45.727: INFO: Condition Ready of node ca-minion-group-8gtp is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Nov 30 09:32:45.727: INFO: Waiting for ready nodes 2, current ready 2, not ready nodes 4
Nov 30 09:33:05.775: INFO: Condition Ready of node ca-minion-group-1-bjq8 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Nov 30 09:33:05.775: INFO: Condition Ready of node ca-minion-group-5khq is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-30 09:31:42 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 09:31:47 +0000 UTC}].
Failure Nov 30 09:33:05.775: INFO: Condition Ready of node ca-minion-group-8gtp is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Nov 30 09:33:05.775: INFO: Waiting for ready nodes 2, current ready 2, not ready nodes 3 Nov 30 09:33:25.823: INFO: Cluster has reached the desired number of ready nodes 2 STEP: Remove taint from node ca-master 11/30/22 09:33:25.866 STEP: Remove taint from node ca-minion-group-1-f12z 11/30/22 09:33:25.909 STEP: Remove taint from node ca-minion-group-wp8h 11/30/22 09:33:25.951 I1130 09:33:25.994526 8016 cluster_size_autoscaling.go:165] Made nodes schedulable again in 127.603015ms [DeferCleanup (Each)] [sig-autoscaling] Cluster size autoscaling [Slow] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-autoscaling] Cluster size autoscaling [Slow] dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/30/22 09:33:25.995 STEP: Collecting events from namespace "autoscaling-5449". 11/30/22 09:33:25.995 STEP: Found 0 events. 11/30/22 09:33:26.036 Nov 30 09:33:26.077: INFO: POD NODE PHASE GRACE CONDITIONS Nov 30 09:33:26.078: INFO: Nov 30 09:33:26.120: INFO: Logging node info for node ca-master Nov 30 09:33:26.163: INFO: Node Info: &Node{ObjectMeta:{ca-master 2a25a1e5-76d1-4d88-8f78-b63dca9ba016 7282 0 2022-11-30 08:55:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:ca-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-30 08:55:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-30 08:55:56 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2022-11-30 08:56:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2022-11-30 09:32:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-autoscaling-migs/us-west1-b/ca-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-30 08:55:56 +0000 UTC,LastTransitionTime:2022-11-30 08:55:56 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-30 09:32:18 +0000 UTC,LastTransitionTime:2022-11-30 08:55:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-30 09:32:18 +0000 UTC,LastTransitionTime:2022-11-30 08:55:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-30 09:32:18 +0000 UTC,LastTransitionTime:2022-11-30 08:55:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-30 09:32:18 +0000 UTC,LastTransitionTime:2022-11-30 08:56:05 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.230.76.149,},NodeAddress{Type:InternalDNS,Address:ca-master.c.k8s-jkns-gci-autoscaling-migs.internal,},NodeAddress{Type:Hostname,Address:ca-master.c.k8s-jkns-gci-autoscaling-migs.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bac070a384fefb9133ee7878e15673cf,SystemUUID:bac070a3-84fe-fb91-33ee-7878e15673cf,BootID:fe19ddf9-af1e-416e-a389-0ed6e929f60e,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.6.6,KubeletVersion:v1.27.0-alpha.0.53+d98e9620e37995,KubeProxyVersion:v1.27.0-alpha.0.53+d98e9620e37995,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.53_d98e9620e37995],SizeBytes:135160275,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.53_d98e9620e37995],SizeBytes:124990265,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.53_d98e9620e37995],SizeBytes:57660216,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/autoscaling/cluster-autoscaler@sha256:07ab8c89cd0ad296ddb6347febe196d8fe0e1c757656a98f71199860d87cf1a5 registry.k8s.io/autoscaling/cluster-autoscaler:v1.22.0],SizeBytes:24220268,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 30 09:33:26.164: INFO: Logging kubelet events for node ca-master Nov 30 09:33:26.267: INFO: Logging pods the kubelet thinks is on node ca-master Nov 30 09:33:26.370: INFO: konnectivity-server-ca-master started at 2022-11-30 08:55:00 +0000 UTC (0+1 container statuses recorded) Nov 30 09:33:26.370: INFO: Container konnectivity-server-container ready: true, restart count 0 Nov 30 09:33:26.370: INFO: kube-addon-manager-ca-master started at 2022-11-30 08:55:18 +0000 UTC (0+1 container statuses recorded) Nov 30 09:33:26.370: INFO: Container kube-addon-manager ready: true, restart count 0 Nov 30 09:33:26.370: INFO: metadata-proxy-v0.1-vp7mp started at 2022-11-30 08:56:16 +0000 UTC (0+2 
container statuses recorded) Nov 30 09:33:26.370: INFO: Container metadata-proxy ready: true, restart count 0 Nov 30 09:33:26.370: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 30 09:33:26.370: INFO: kube-apiserver-ca-master started at 2022-11-30 08:55:00 +0000 UTC (0+1 container statuses recorded) Nov 30 09:33:26.370: INFO: Container kube-apiserver ready: true, restart count 0 Nov 30 09:33:26.370: INFO: kube-controller-manager-ca-master started at 2022-11-30 08:55:00 +0000 UTC (0+1 container statuses recorded) Nov 30 09:33:26.370: INFO: Container kube-controller-manager ready: true, restart count 1 Nov 30 09:33:26.370: INFO: kube-scheduler-ca-master started at 2022-11-30 08:55:00 +0000 UTC (0+1 container statuses recorded) Nov 30 09:33:26.370: INFO: Container kube-scheduler ready: true, restart count 0 Nov 30 09:33:26.370: INFO: etcd-server-events-ca-master started at 2022-11-30 08:55:00 +0000 UTC (0+1 container statuses recorded) Nov 30 09:33:26.370: INFO: Container etcd-container ready: true, restart count 0 Nov 30 09:33:26.370: INFO: etcd-server-ca-master started at 2022-11-30 08:55:00 +0000 UTC (0+1 container statuses recorded) Nov 30 09:33:26.370: INFO: Container etcd-container ready: true, restart count 0 Nov 30 09:33:26.370: INFO: cluster-autoscaler-ca-master started at 2022-11-30 08:55:18 +0000 UTC (0+1 container statuses recorded) Nov 30 09:33:26.370: INFO: Container cluster-autoscaler ready: true, restart count 2 Nov 30 09:33:26.370: INFO: l7-lb-controller-ca-master started at 2022-11-30 08:55:18 +0000 UTC (0+1 container statuses recorded) Nov 30 09:33:26.370: INFO: Container l7-lb-controller ready: true, restart count 2 Nov 30 09:33:26.581: INFO: Latency metrics for node ca-master Nov 30 09:33:26.581: INFO: Logging node info for node ca-minion-group-1-f12z Nov 30 09:33:26.623: INFO: Node Info: &Node{ObjectMeta:{ca-minion-group-1-f12z 046aca94-fd09-4055-8339-9864012a52d7 6913 0 2022-11-30 09:09:46 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:ca-minion-group-1-f12z kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-30 09:09:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-30 09:09:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.5.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-30 09:09:56 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2022-11-30 09:29:54 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-30 09:30:40 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.5.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-autoscaling-migs/us-west1-b/ca-minion-group-1-f12z,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.5.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-30 09:29:54 +0000 UTC,LastTransitionTime:2022-11-30 09:09:50 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-30 09:29:54 +0000 UTC,LastTransitionTime:2022-11-30 09:09:50 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-30 09:29:54 +0000 UTC,LastTransitionTime:2022-11-30 09:09:50 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-30 09:29:54 +0000 
UTC,LastTransitionTime:2022-11-30 09:09:50 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-30 09:29:54 +0000 UTC,LastTransitionTime:2022-11-30 09:09:50 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-30 09:29:54 +0000 UTC,LastTransitionTime:2022-11-30 09:09:50 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-30 09:29:54 +0000 UTC,LastTransitionTime:2022-11-30 09:09:50 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-30 09:09:56 +0000 UTC,LastTransitionTime:2022-11-30 09:09:56 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-30 09:30:40 +0000 UTC,LastTransitionTime:2022-11-30 09:09:46 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-30 09:30:40 +0000 UTC,LastTransitionTime:2022-11-30 09:09:46 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-30 09:30:40 +0000 UTC,LastTransitionTime:2022-11-30 09:09:46 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-30 09:30:40 +0000 UTC,LastTransitionTime:2022-11-30 09:09:48 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.7,},NodeAddress{Type:ExternalIP,Address:34.105.60.48,},NodeAddress{Type:InternalDNS,Address:ca-minion-group-1-f12z.c.k8s-jkns-gci-autoscaling-migs.internal,},NodeAddress{Type:Hostname,Address:ca-minion-group-1-f12z.c.k8s-jkns-gci-autoscaling-migs.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:869dca29c4ad6ec2b63500da3dd8835b,SystemUUID:869dca29-c4ad-6ec2-b635-00da3dd8835b,BootID:b6ae2af2-9795-4af1-907c-98ba6d9ee172,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.6.6,KubeletVersion:v1.27.0-alpha.0.53+d98e9620e37995,KubeProxyVersion:v1.27.0-alpha.0.53+d98e9620e37995,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.53_d98e9620e37995],SizeBytes:67201736,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 30 09:33:26.624: INFO: Logging kubelet events for node ca-minion-group-1-f12z Nov 30 09:33:26.673: INFO: Logging pods the kubelet thinks is on node ca-minion-group-1-f12z Nov 30 09:33:26.736: INFO: kube-proxy-ca-minion-group-1-f12z started at 2022-11-30 09:09:46 +0000 UTC (0+1 container statuses recorded) Nov 30 09:33:26.736: INFO: Container kube-proxy ready: true, restart count 0 Nov 30 09:33:26.736: INFO: metadata-proxy-v0.1-2l9sk started at 2022-11-30 09:09:47 +0000 UTC (0+2 container statuses recorded) Nov 30 09:33:26.736: INFO: Container metadata-proxy ready: true, restart count 0 Nov 30 09:33:26.736: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 30 09:33:26.736: INFO: konnectivity-agent-2lgfz started at 2022-11-30 09:09:56 +0000 UTC (0+1 container statuses recorded) Nov 30 09:33:26.736: INFO: Container konnectivity-agent ready: true, restart count 0 Nov 30 09:33:26.908: INFO: Latency metrics for node ca-minion-group-1-f12z Nov 30 09:33:26.908: INFO: Logging node info for node ca-minion-group-wp8h Nov 30 09:33:26.951: INFO: Node Info: &Node{ObjectMeta:{ca-minion-group-wp8h cabb7ed6-6a4e-4a14-a7cb-07ef65191e0f 7052 0 2022-11-30 09:09:55 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:ca-minion-group-wp8h kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 
volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-30 09:09:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-30 09:10:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.7.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-30 09:10:06 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2022-11-30 09:30:02 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-30 09:30:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.7.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-autoscaling-migs/us-west1-b/ca-minion-group-wp8h,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.7.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 
DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-30 09:30:02 +0000 UTC,LastTransitionTime:2022-11-30 09:09:59 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-30 09:30:02 +0000 UTC,LastTransitionTime:2022-11-30 09:09:59 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-30 09:30:02 +0000 UTC,LastTransitionTime:2022-11-30 09:09:59 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-30 09:30:02 +0000 UTC,LastTransitionTime:2022-11-30 09:09:59 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-30 09:30:02 +0000 UTC,LastTransitionTime:2022-11-30 09:09:59 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-30 09:30:02 +0000 UTC,LastTransitionTime:2022-11-30 09:09:59 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-30 09:30:02 +0000 UTC,LastTransitionTime:2022-11-30 09:09:59 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-30 09:10:06 +0000 UTC,LastTransitionTime:2022-11-30 09:10:06 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-30 09:30:48 +0000 UTC,LastTransitionTime:2022-11-30 09:09:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-30 09:30:48 +0000 UTC,LastTransitionTime:2022-11-30 09:09:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-30 09:30:48 +0000 UTC,LastTransitionTime:2022-11-30 09:09:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-30 09:30:48 +0000 UTC,LastTransitionTime:2022-11-30 09:09:56 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.8,},NodeAddress{Type:ExternalIP,Address:34.168.80.138,},NodeAddress{Type:InternalDNS,Address:ca-minion-group-wp8h.c.k8s-jkns-gci-autoscaling-migs.internal,},NodeAddress{Type:Hostname,Address:ca-minion-group-wp8h.c.k8s-jkns-gci-autoscaling-migs.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b506b63fb01c6040e71588bca8be6fdd,SystemUUID:b506b63f-b01c-6040-e715-88bca8be6fdd,BootID:bd2be204-29ef-43fe-9f42-c8f31fa19831,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.6.6,KubeletVersion:v1.27.0-alpha.0.53+d98e9620e37995,KubeProxyVersion:v1.27.0-alpha.0.53+d98e9620e37995,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.53_d98e9620e37995],SizeBytes:67201736,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 30 09:33:26.952: INFO: Logging kubelet events for node ca-minion-group-wp8h Nov 30 09:33:26.997: INFO: Logging pods the kubelet thinks is on node ca-minion-group-wp8h Nov 30 09:33:27.059: INFO: konnectivity-agent-hh8bs started at 2022-11-30 09:10:06 +0000 UTC (0+1 container statuses recorded) Nov 30 09:33:27.060: INFO: Container konnectivity-agent ready: true, restart count 0 Nov 30 09:33:27.060: INFO: kube-proxy-ca-minion-group-wp8h started at 2022-11-30 09:09:56 +0000 UTC (0+1 container statuses recorded) Nov 30 09:33:27.060: INFO: Container kube-proxy ready: true, restart count 0 Nov 30 09:33:27.060: INFO: metadata-proxy-v0.1-kx6wg started at 2022-11-30 09:09:56 +0000 UTC (0+2 container statuses recorded) Nov 30 09:33:27.060: INFO: Container metadata-proxy ready: true, restart count 0 Nov 30 09:33:27.060: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 30 09:33:27.234: INFO: Latency metrics for node ca-minion-group-wp8h [DeferCleanup (Each)] [sig-autoscaling] Cluster size autoscaling [Slow] tear down framework | framework.go:193 STEP: Destroying namespace "autoscaling-5449" for this suite. 11/30/22 09:33:27.235
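The two node dumps above list every NodeCondition reported by the kubelet and node-problem-detector (Type/Status/Reason/Message plus heartbeat and transition times). As a hedged triage aside, a minimal client-go sketch along these lines could reproduce that condition table for a single node; the node name and kubeconfig path are copied from this log and are otherwise assumptions, not part of the test framework.

// Triage sketch, not framework code: print the NodeConditions for one node.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path, mirroring the ">>> kubeConfig:" line in this log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ca-minion-group-wp8h", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		// Same Type/Status/Reason triples that appear in the dump above.
		fmt.Printf("%-30s %-6s %s\n", c.Type, c.Status, c.Reason)
	}
}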
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-autoscaling\]\sCluster\ssize\sautoscaling\s\[Slow\]\sshould\scorrectly\sscale\sdown\safter\sa\snode\sis\snot\sneeded\sand\sone\snode\sis\sbroken\s\[Feature\:ClusterSizeAutoscalingScaleDown\]$'
test/e2e/framework/network/utils.go:1158 k8s.io/kubernetes/test/e2e/framework/network.UnblockNetwork({0xc003acea80, 0xf}, {0xc003aceb80, 0xd}) test/e2e/framework/network/utils.go:1158 +0x26a k8s.io/kubernetes/test/e2e/framework/network.TestUnderTemporaryNetworkFailure.func1() test/e2e/framework/network/utils.go:1086 +0xd7 k8s.io/kubernetes/test/e2e/framework/network.TestUnderTemporaryNetworkFailure({0x801de88, 0xc003f41520}, {0x7fa3ee0?, 0xc0000cf910?}, 0xc000900b00, 0xc002181f58) test/e2e/framework/network/utils.go:1105 +0x4b1 k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.22() test/e2e/autoscaling/cluster_size_autoscaling.go:694 +0x89 from junit_01.xml
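The failing frames above sit in TestUnderTemporaryNetworkFailure and UnblockNetwork, the framework helpers that fake a broken node by blocking its traffic to the control plane and restoring it afterwards. A rough standalone sketch of that iptables pattern follows; it assumes it runs as a privileged user directly on the node (the real helpers issue equivalent commands over SSH from the test runner), and the control-plane IP is copied from the log below.

// Sketch of the block/unblock pattern behind TestUnderTemporaryNetworkFailure.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func iptables(action, dest string) error {
	// "--insert" adds the REJECT rule; "--delete" removes the same rule.
	out, err := exec.Command("sudo", "iptables", action, "OUTPUT",
		"--destination", dest, "--jump", "REJECT").CombinedOutput()
	if err != nil {
		return fmt.Errorf("iptables %s failed: %v: %s", action, err, out)
	}
	return nil
}

func main() {
	const master = "35.230.76.149" // control-plane IP from the log below
	if err := iptables("--insert", master); err != nil {
		panic(err)
	}
	// Restore connectivity no matter how the test body ends; leaking the rule
	// would leave the node unreachable for good, which is why the stack above
	// reaches UnblockNetwork from a deferred cleanup.
	defer iptables("--delete", master)

	// Window during which the kubelet cannot post status and Ready flips to false.
	time.Sleep(2 * time.Minute)
}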
[BeforeEach] [sig-autoscaling] Cluster size autoscaling [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/30/22 12:29:37.128 Nov 30 12:29:37.128: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename autoscaling 11/30/22 12:29:37.129 STEP: Waiting for a default service account to be provisioned in namespace 11/30/22 12:29:37.261 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/30/22 12:29:37.343 [BeforeEach] [sig-autoscaling] Cluster size autoscaling [Slow] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-autoscaling] Cluster size autoscaling [Slow] test/e2e/autoscaling/cluster_size_autoscaling.go:103 STEP: Initial size of ca-minion-group-1: 1 11/30/22 12:29:41.05 STEP: Initial size of ca-minion-group: 1 11/30/22 12:29:44.609 Nov 30 12:29:44.654: INFO: Cluster has reached the desired number of ready nodes 2 STEP: Initial number of schedulable nodes: 2 11/30/22 12:29:44.699 [It] should correctly scale down after a node is not needed and one node is broken [Feature:ClusterSizeAutoscalingScaleDown] test/e2e/autoscaling/cluster_size_autoscaling.go:691 Nov 30 12:29:44.743: INFO: Getting external IP address for ca-minion-group-1-wcgp STEP: block network traffic from node ca-minion-group-1-wcgp to the control plane 11/30/22 12:29:44.784 Nov 30 12:29:44.785: INFO: Waiting 2m0s to ensure node ca-minion-group-1-wcgp is ready before beginning test... Nov 30 12:29:44.785: INFO: Waiting up to 2m0s for node ca-minion-group-1-wcgp condition Ready to be true Nov 30 12:29:44.827: INFO: block network traffic from 34.82.84.140:22 to 35.230.76.149 Nov 30 12:29:45.354: INFO: Waiting 2m0s for node ca-minion-group-1-wcgp to be not ready after simulated network failure Nov 30 12:29:45.354: INFO: Waiting up to 2m0s for node ca-minion-group-1-wcgp condition Ready to be false Nov 30 12:29:45.397: INFO: Condition Ready of node ca-minion-group-1-wcgp is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Nov 30 12:29:47.442: INFO: Condition Ready of node ca-minion-group-1-wcgp is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Nov 30 12:29:49.487: INFO: Condition Ready of node ca-minion-group-1-wcgp is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Nov 30 12:29:51.531: INFO: Condition Ready of node ca-minion-group-1-wcgp is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Nov 30 12:29:53.574: INFO: Condition Ready of node ca-minion-group-1-wcgp is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Nov 30 12:29:55.617: INFO: Condition Ready of node ca-minion-group-1-wcgp is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Nov 30 12:29:57.661: INFO: Condition Ready of node ca-minion-group-1-wcgp is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Nov 30 12:29:59.704: INFO: Condition Ready of node ca-minion-group-1-wcgp is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Nov 30 12:30:01.748: INFO: Condition Ready of node ca-minion-group-1-wcgp is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Nov 30 12:30:03.793: INFO: Condition Ready of node ca-minion-group-1-wcgp is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Nov 30 12:30:05.837: INFO: Condition Ready of node ca-minion-group-1-wcgp is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Nov 30 12:30:07.880: INFO: Condition Ready of node ca-minion-group-1-wcgp is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Nov 30 12:30:09.923: INFO: Condition Ready of node ca-minion-group-1-wcgp is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Nov 30 12:30:11.968: INFO: Condition Ready of node ca-minion-group-1-wcgp is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Nov 30 12:30:14.011: INFO: Condition Ready of node ca-minion-group-1-wcgp is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Nov 30 12:30:16.055: INFO: Condition Ready of node ca-minion-group-1-wcgp is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Nov 30 12:30:18.099: INFO: Condition Ready of node ca-minion-group-1-wcgp is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Nov 30 12:30:20.142: INFO: Condition Ready of node ca-minion-group-1-wcgp is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Nov 30 12:30:22.185: INFO: Condition Ready of node ca-minion-group-1-wcgp is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Nov 30 12:30:24.236: INFO: Condition Ready of node ca-minion-group-1-wcgp is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled STEP: Create PodDisruptionBudgets for kube-system components, so they can be migrated if required 11/30/22 12:30:26.279 STEP: Create PodDisruptionBudget for kube-dns 11/30/22 12:30:26.28 STEP: Create PodDisruptionBudget for kube-dns-autoscaler 11/30/22 12:30:26.323 STEP: Create PodDisruptionBudget for metrics-server 11/30/22 12:30:26.367 STEP: Create PodDisruptionBudget for kubernetes-dashboard 11/30/22 12:30:26.41 STEP: Create PodDisruptionBudget for glbc 11/30/22 12:30:26.453 STEP: Manually increase cluster size 11/30/22 12:30:26.496 STEP: Setting size of ca-minion-group-1 to 4 11/30/22 12:30:30.062 Nov 30 12:30:30.062: INFO: Skipping dumping logs from cluster Nov 30 12:30:34.576: INFO: Skipping dumping logs from cluster STEP: Setting size of ca-minion-group to 4 11/30/22 12:30:38.316 Nov 30 12:30:38.316: INFO: Skipping dumping logs from cluster Nov 30 12:30:43.029: INFO: Skipping dumping logs from cluster Nov 30 12:30:43.076: INFO: Condition Ready of node ca-minion-group-1-wcgp is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669810998 PreferNoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-11-30 12:30:24 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 12:30:29 +0000 UTC}]. 
Failure I1130 12:30:43.076120 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 2, not ready nodes 1 Nov 30 12:31:03.123: INFO: Condition Ready of node ca-minion-group-1-wcgp is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669810998 PreferNoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-11-30 12:30:24 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 12:30:29 +0000 UTC}]. Failure I1130 12:31:03.123474 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 2, not ready nodes 1 Nov 30 12:31:23.170: INFO: Condition Ready of node ca-minion-group-1-wcgp is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669810998 PreferNoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-11-30 12:30:24 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 12:30:29 +0000 UTC}]. Failure I1130 12:31:23.170187 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 2, not ready nodes 1 Nov 30 12:31:43.217: INFO: Condition Ready of node ca-minion-group-1-wcgp is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669810998 PreferNoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-11-30 12:30:24 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 12:30:29 +0000 UTC}]. Failure I1130 12:31:43.217919 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 2, not ready nodes 1 Nov 30 12:32:03.302: INFO: Condition Ready of node ca-minion-group-1-wcgp is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669810998 PreferNoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-11-30 12:30:24 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 12:30:29 +0000 UTC}]. Failure I1130 12:32:03.302984 8016 cluster_size_autoscaling.go:1381] Cluster has reached the desired size STEP: Some node should be removed 11/30/22 12:32:03.302 Nov 30 12:32:03.351: INFO: Condition Ready of node ca-minion-group-1-wcgp is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669810998 PreferNoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-11-30 12:30:24 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 12:30:29 +0000 UTC}]. Failure I1130 12:32:03.352009 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 8, not ready nodes 1 Nov 30 12:32:23.404: INFO: Condition Ready of node ca-minion-group-1-wcgp is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669810998 PreferNoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-11-30 12:30:24 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 12:30:29 +0000 UTC}]. Failure I1130 12:32:23.404264 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 8, not ready nodes 1 Nov 30 12:32:43.456: INFO: Condition Ready of node ca-minion-group-1-wcgp is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669810998 PreferNoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-11-30 12:30:24 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 12:30:29 +0000 UTC}]. 
Failure I1130 12:32:43.456523 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 8, not ready nodes 1 Nov 30 12:33:03.508: INFO: Condition Ready of node ca-minion-group-1-wcgp is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669810998 PreferNoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-11-30 12:30:24 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 12:30:29 +0000 UTC}]. Failure I1130 12:33:03.508245 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 8, not ready nodes 1 Nov 30 12:33:23.559: INFO: Condition Ready of node ca-minion-group-1-wcgp is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669810998 PreferNoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-11-30 12:30:24 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 12:30:29 +0000 UTC}]. Failure I1130 12:33:23.559514 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 8, not ready nodes 1 Nov 30 12:33:43.610: INFO: Condition Ready of node ca-minion-group-1-wcgp is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669810998 PreferNoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-11-30 12:30:24 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 12:30:29 +0000 UTC}]. Failure I1130 12:33:43.610242 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 8, not ready nodes 1 Nov 30 12:34:03.661: INFO: Condition Ready of node ca-minion-group-1-wcgp is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669810998 PreferNoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-11-30 12:30:24 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 12:30:29 +0000 UTC}]. Failure I1130 12:34:03.661393 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 8, not ready nodes 1 Nov 30 12:34:23.712: INFO: Condition Ready of node ca-minion-group-1-wcgp is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669810998 PreferNoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-11-30 12:30:24 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 12:30:29 +0000 UTC}]. Failure I1130 12:34:23.712430 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 8, not ready nodes 1 Nov 30 12:34:43.763: INFO: Condition Ready of node ca-minion-group-1-wcgp is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669810998 PreferNoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-11-30 12:30:24 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 12:30:29 +0000 UTC}]. 
Failure I1130 12:34:43.763130 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 8, not ready nodes 1 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed and one node is broken [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 5m7.572s) test/e2e/autoscaling/cluster_size_autoscaling.go:691 In [It] (Node Runtime: 5m0.001s) test/e2e/autoscaling/cluster_size_autoscaling.go:691 At [By Step] Some node should be removed (Step Runtime: 2m41.397s) test/e2e/autoscaling/cluster_size_autoscaling.go:683 Spec Goroutine goroutine 8248 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc003f41520}, 0xc001e63c28, 0x1176592e000, 0x1) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.20(0x1) test/e2e/autoscaling/cluster_size_autoscaling.go:684 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.22.1() test/e2e/autoscaling/cluster_size_autoscaling.go:694 k8s.io/kubernetes/test/e2e/framework/network.TestUnderTemporaryNetworkFailure({0x801de88, 0xc003f41520}, {0x7fa3ee0?, 0xc0000cf910?}, 0xc000900b00, 0xc002181f58) test/e2e/framework/network/utils.go:1103 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.22() test/e2e/autoscaling/cluster_size_autoscaling.go:694 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004676000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 30 12:35:03.815: INFO: Condition Ready of node ca-minion-group-1-wcgp is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669810998 PreferNoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-11-30 12:30:24 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 12:30:29 +0000 UTC}]. 
Failure I1130 12:35:03.815400 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 8, not ready nodes 1 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed and one node is broken [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 5m27.574s) test/e2e/autoscaling/cluster_size_autoscaling.go:691 In [It] (Node Runtime: 5m20.002s) test/e2e/autoscaling/cluster_size_autoscaling.go:691 At [By Step] Some node should be removed (Step Runtime: 3m1.399s) test/e2e/autoscaling/cluster_size_autoscaling.go:683 Spec Goroutine goroutine 8248 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc003f41520}, 0xc001e63c28, 0x1176592e000, 0x1) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.20(0x1) test/e2e/autoscaling/cluster_size_autoscaling.go:684 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.22.1() test/e2e/autoscaling/cluster_size_autoscaling.go:694 k8s.io/kubernetes/test/e2e/framework/network.TestUnderTemporaryNetworkFailure({0x801de88, 0xc003f41520}, {0x7fa3ee0?, 0xc0000cf910?}, 0xc000900b00, 0xc002181f58) test/e2e/framework/network/utils.go:1103 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.22() test/e2e/autoscaling/cluster_size_autoscaling.go:694 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004676000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 30 12:35:23.867: INFO: Condition Ready of node ca-minion-group-1-wcgp is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669810998 PreferNoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-11-30 12:30:24 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 12:30:29 +0000 UTC}]. 
Failure I1130 12:35:23.867447 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 8, not ready nodes 1 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed and one node is broken [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 5m47.575s) test/e2e/autoscaling/cluster_size_autoscaling.go:691 In [It] (Node Runtime: 5m40.003s) test/e2e/autoscaling/cluster_size_autoscaling.go:691 At [By Step] Some node should be removed (Step Runtime: 3m21.4s) test/e2e/autoscaling/cluster_size_autoscaling.go:683 Spec Goroutine goroutine 8248 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc003f41520}, 0xc001e63c28, 0x1176592e000, 0x1) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.20(0x1) test/e2e/autoscaling/cluster_size_autoscaling.go:684 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.22.1() test/e2e/autoscaling/cluster_size_autoscaling.go:694 k8s.io/kubernetes/test/e2e/framework/network.TestUnderTemporaryNetworkFailure({0x801de88, 0xc003f41520}, {0x7fa3ee0?, 0xc0000cf910?}, 0xc000900b00, 0xc002181f58) test/e2e/framework/network/utils.go:1103 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.22() test/e2e/autoscaling/cluster_size_autoscaling.go:694 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004676000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 30 12:35:43.918: INFO: Condition Ready of node ca-minion-group-1-wcgp is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669810998 PreferNoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-11-30 12:30:24 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 12:30:29 +0000 UTC}]. 
Failure I1130 12:35:43.918626 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 8, not ready nodes 1 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed and one node is broken [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 6m7.576s) test/e2e/autoscaling/cluster_size_autoscaling.go:691 In [It] (Node Runtime: 6m0.004s) test/e2e/autoscaling/cluster_size_autoscaling.go:691 At [By Step] Some node should be removed (Step Runtime: 3m41.401s) test/e2e/autoscaling/cluster_size_autoscaling.go:683 Spec Goroutine goroutine 8248 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc003f41520}, 0xc001e63c28, 0x1176592e000, 0x1) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.20(0x1) test/e2e/autoscaling/cluster_size_autoscaling.go:684 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.22.1() test/e2e/autoscaling/cluster_size_autoscaling.go:694 k8s.io/kubernetes/test/e2e/framework/network.TestUnderTemporaryNetworkFailure({0x801de88, 0xc003f41520}, {0x7fa3ee0?, 0xc0000cf910?}, 0xc000900b00, 0xc002181f58) test/e2e/framework/network/utils.go:1103 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.22() test/e2e/autoscaling/cluster_size_autoscaling.go:694 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004676000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 30 12:36:03.970: INFO: Condition Ready of node ca-minion-group-1-wcgp is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669810998 PreferNoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-11-30 12:30:24 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 12:30:29 +0000 UTC}]. 
Failure I1130 12:36:03.970198 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 8, not ready nodes 1 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed and one node is broken [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 6m27.578s) test/e2e/autoscaling/cluster_size_autoscaling.go:691 In [It] (Node Runtime: 6m20.006s) test/e2e/autoscaling/cluster_size_autoscaling.go:691 At [By Step] Some node should be removed (Step Runtime: 4m1.403s) test/e2e/autoscaling/cluster_size_autoscaling.go:683 Spec Goroutine goroutine 8248 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc003f41520}, 0xc001e63c28, 0x1176592e000, 0x1) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.20(0x1) test/e2e/autoscaling/cluster_size_autoscaling.go:684 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.22.1() test/e2e/autoscaling/cluster_size_autoscaling.go:694 k8s.io/kubernetes/test/e2e/framework/network.TestUnderTemporaryNetworkFailure({0x801de88, 0xc003f41520}, {0x7fa3ee0?, 0xc0000cf910?}, 0xc000900b00, 0xc002181f58) test/e2e/framework/network/utils.go:1103 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.22() test/e2e/autoscaling/cluster_size_autoscaling.go:694 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004676000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 30 12:36:24.022: INFO: Condition Ready of node ca-minion-group-1-wcgp is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669810998 PreferNoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-11-30 12:30:24 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 12:30:29 +0000 UTC}]. 
Failure I1130 12:36:24.022456 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 8, not ready nodes 1 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed and one node is broken [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 6m47.579s) test/e2e/autoscaling/cluster_size_autoscaling.go:691 In [It] (Node Runtime: 6m40.007s) test/e2e/autoscaling/cluster_size_autoscaling.go:691 At [By Step] Some node should be removed (Step Runtime: 4m21.404s) test/e2e/autoscaling/cluster_size_autoscaling.go:683 Spec Goroutine goroutine 8248 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc003f41520}, 0xc001e63c28, 0x1176592e000, 0x1) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.20(0x1) test/e2e/autoscaling/cluster_size_autoscaling.go:684 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.22.1() test/e2e/autoscaling/cluster_size_autoscaling.go:694 k8s.io/kubernetes/test/e2e/framework/network.TestUnderTemporaryNetworkFailure({0x801de88, 0xc003f41520}, {0x7fa3ee0?, 0xc0000cf910?}, 0xc000900b00, 0xc002181f58) test/e2e/framework/network/utils.go:1103 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.22() test/e2e/autoscaling/cluster_size_autoscaling.go:694 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004676000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 30 12:36:44.076: INFO: Condition Ready of node ca-minion-group-1-wcgp is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669810998 PreferNoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-11-30 12:30:24 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 12:30:29 +0000 UTC}]. 
Failure I1130 12:36:44.076184 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 8, not ready nodes 1 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed and one node is broken [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 7m7.58s) test/e2e/autoscaling/cluster_size_autoscaling.go:691 In [It] (Node Runtime: 7m0.009s) test/e2e/autoscaling/cluster_size_autoscaling.go:691 At [By Step] Some node should be removed (Step Runtime: 4m41.405s) test/e2e/autoscaling/cluster_size_autoscaling.go:683 Spec Goroutine goroutine 8248 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc003f41520}, 0xc001e63c28, 0x1176592e000, 0x1) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.20(0x1) test/e2e/autoscaling/cluster_size_autoscaling.go:684 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.22.1() test/e2e/autoscaling/cluster_size_autoscaling.go:694 k8s.io/kubernetes/test/e2e/framework/network.TestUnderTemporaryNetworkFailure({0x801de88, 0xc003f41520}, {0x7fa3ee0?, 0xc0000cf910?}, 0xc000900b00, 0xc002181f58) test/e2e/framework/network/utils.go:1103 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.22() test/e2e/autoscaling/cluster_size_autoscaling.go:694 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004676000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 30 12:37:04.130: INFO: Condition Ready of node ca-minion-group-1-wcgp is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669810998 PreferNoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-11-30 12:30:24 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 12:30:29 +0000 UTC}]. 
Failure I1130 12:37:04.130237 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 8, not ready nodes 1 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed and one node is broken [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 7m27.581s) test/e2e/autoscaling/cluster_size_autoscaling.go:691 In [It] (Node Runtime: 7m20.01s) test/e2e/autoscaling/cluster_size_autoscaling.go:691 At [By Step] Some node should be removed (Step Runtime: 5m1.406s) test/e2e/autoscaling/cluster_size_autoscaling.go:683 Spec Goroutine goroutine 8248 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc003f41520}, 0xc001e63c28, 0x1176592e000, 0x1) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.20(0x1) test/e2e/autoscaling/cluster_size_autoscaling.go:684 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.22.1() test/e2e/autoscaling/cluster_size_autoscaling.go:694 k8s.io/kubernetes/test/e2e/framework/network.TestUnderTemporaryNetworkFailure({0x801de88, 0xc003f41520}, {0x7fa3ee0?, 0xc0000cf910?}, 0xc000900b00, 0xc002181f58) test/e2e/framework/network/utils.go:1103 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.22() test/e2e/autoscaling/cluster_size_autoscaling.go:694 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004676000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 30 12:37:24.184: INFO: Condition Ready of node ca-minion-group-1-wcgp is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669810998 PreferNoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-11-30 12:30:24 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 12:30:29 +0000 UTC}]. 
Failure I1130 12:37:24.184514 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 8, not ready nodes 1 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed and one node is broken [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 7m47.584s) test/e2e/autoscaling/cluster_size_autoscaling.go:691 In [It] (Node Runtime: 7m40.012s) test/e2e/autoscaling/cluster_size_autoscaling.go:691 At [By Step] Some node should be removed (Step Runtime: 5m21.409s) test/e2e/autoscaling/cluster_size_autoscaling.go:683 Spec Goroutine goroutine 8248 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc003f41520}, 0xc001e63c28, 0x1176592e000, 0x1) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.20(0x1) test/e2e/autoscaling/cluster_size_autoscaling.go:684 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.22.1() test/e2e/autoscaling/cluster_size_autoscaling.go:694 k8s.io/kubernetes/test/e2e/framework/network.TestUnderTemporaryNetworkFailure({0x801de88, 0xc003f41520}, {0x7fa3ee0?, 0xc0000cf910?}, 0xc000900b00, 0xc002181f58) test/e2e/framework/network/utils.go:1103 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.22() test/e2e/autoscaling/cluster_size_autoscaling.go:694 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004676000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 30 12:37:44.249: INFO: Condition Ready of node ca-minion-group-1-wcgp is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669810998 PreferNoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-11-30 12:30:24 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 12:30:29 +0000 UTC}]. 
Failure I1130 12:37:44.249739 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 8, not ready nodes 1 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed and one node is broken [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 8m7.585s) test/e2e/autoscaling/cluster_size_autoscaling.go:691 In [It] (Node Runtime: 8m0.013s) test/e2e/autoscaling/cluster_size_autoscaling.go:691 At [By Step] Some node should be removed (Step Runtime: 5m41.41s) test/e2e/autoscaling/cluster_size_autoscaling.go:683 Spec Goroutine goroutine 8248 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc003f41520}, 0xc001e63c28, 0x1176592e000, 0x1) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.20(0x1) test/e2e/autoscaling/cluster_size_autoscaling.go:684 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.22.1() test/e2e/autoscaling/cluster_size_autoscaling.go:694 k8s.io/kubernetes/test/e2e/framework/network.TestUnderTemporaryNetworkFailure({0x801de88, 0xc003f41520}, {0x7fa3ee0?, 0xc0000cf910?}, 0xc000900b00, 0xc002181f58) test/e2e/framework/network/utils.go:1103 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.22() test/e2e/autoscaling/cluster_size_autoscaling.go:694 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004676000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 30 12:38:04.325: INFO: Condition Ready of node ca-minion-group-1-wcgp is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669810998 PreferNoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-11-30 12:30:24 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 12:30:29 +0000 UTC}]. 
Failure I1130 12:38:04.325643 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 8, not ready nodes 1 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed and one node is broken [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 8m27.586s) test/e2e/autoscaling/cluster_size_autoscaling.go:691 In [It] (Node Runtime: 8m20.014s) test/e2e/autoscaling/cluster_size_autoscaling.go:691 At [By Step] Some node should be removed (Step Runtime: 6m1.411s) test/e2e/autoscaling/cluster_size_autoscaling.go:683 Spec Goroutine goroutine 8248 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc003f41520}, 0xc001e63c28, 0x1176592e000, 0x1) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.20(0x1) test/e2e/autoscaling/cluster_size_autoscaling.go:684 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.22.1() test/e2e/autoscaling/cluster_size_autoscaling.go:694 k8s.io/kubernetes/test/e2e/framework/network.TestUnderTemporaryNetworkFailure({0x801de88, 0xc003f41520}, {0x7fa3ee0?, 0xc0000cf910?}, 0xc000900b00, 0xc002181f58) test/e2e/framework/network/utils.go:1103 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.22() test/e2e/autoscaling/cluster_size_autoscaling.go:694 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004676000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 30 12:38:24.380: INFO: Condition Ready of node ca-minion-group-1-wcgp is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669810998 PreferNoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-11-30 12:30:24 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 12:30:29 +0000 UTC}]. 
Failure I1130 12:38:24.380531 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 8, not ready nodes 1 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed and one node is broken [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 8m47.587s) test/e2e/autoscaling/cluster_size_autoscaling.go:691 In [It] (Node Runtime: 8m40.016s) test/e2e/autoscaling/cluster_size_autoscaling.go:691 At [By Step] Some node should be removed (Step Runtime: 6m21.412s) test/e2e/autoscaling/cluster_size_autoscaling.go:683 Spec Goroutine goroutine 8248 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc003f41520}, 0xc001e63c28, 0x1176592e000, 0x1) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.20(0x1) test/e2e/autoscaling/cluster_size_autoscaling.go:684 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.22.1() test/e2e/autoscaling/cluster_size_autoscaling.go:694 k8s.io/kubernetes/test/e2e/framework/network.TestUnderTemporaryNetworkFailure({0x801de88, 0xc003f41520}, {0x7fa3ee0?, 0xc0000cf910?}, 0xc000900b00, 0xc002181f58) test/e2e/framework/network/utils.go:1103 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.22() test/e2e/autoscaling/cluster_size_autoscaling.go:694 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004676000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 30 12:38:44.432: INFO: Condition Ready of node ca-minion-group-1-wcgp is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669810998 PreferNoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-11-30 12:30:24 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 12:30:29 +0000 UTC}]. 
Failure I1130 12:38:44.432654 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 8, not ready nodes 1 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed and one node is broken [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 9m7.589s) test/e2e/autoscaling/cluster_size_autoscaling.go:691 In [It] (Node Runtime: 9m0.017s) test/e2e/autoscaling/cluster_size_autoscaling.go:691 At [By Step] Some node should be removed (Step Runtime: 6m41.414s) test/e2e/autoscaling/cluster_size_autoscaling.go:683 Spec Goroutine goroutine 8248 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc003f41520}, 0xc001e63c28, 0x1176592e000, 0x1) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.20(0x1) test/e2e/autoscaling/cluster_size_autoscaling.go:684 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.22.1() test/e2e/autoscaling/cluster_size_autoscaling.go:694 k8s.io/kubernetes/test/e2e/framework/network.TestUnderTemporaryNetworkFailure({0x801de88, 0xc003f41520}, {0x7fa3ee0?, 0xc0000cf910?}, 0xc000900b00, 0xc002181f58) test/e2e/framework/network/utils.go:1103 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.22() test/e2e/autoscaling/cluster_size_autoscaling.go:694 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004676000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 30 12:39:04.483: INFO: Condition Ready of node ca-minion-group-1-wcgp is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669810998 PreferNoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-11-30 12:30:24 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 12:30:29 +0000 UTC}]. 
Failure I1130 12:39:04.483062 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 8, not ready nodes 1 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed and one node is broken [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 9m27.59s) test/e2e/autoscaling/cluster_size_autoscaling.go:691 In [It] (Node Runtime: 9m20.018s) test/e2e/autoscaling/cluster_size_autoscaling.go:691 At [By Step] Some node should be removed (Step Runtime: 7m1.415s) test/e2e/autoscaling/cluster_size_autoscaling.go:683 Spec Goroutine goroutine 8248 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc003f41520}, 0xc001e63c28, 0x1176592e000, 0x1) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.20(0x1) test/e2e/autoscaling/cluster_size_autoscaling.go:684 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.22.1() test/e2e/autoscaling/cluster_size_autoscaling.go:694 k8s.io/kubernetes/test/e2e/framework/network.TestUnderTemporaryNetworkFailure({0x801de88, 0xc003f41520}, {0x7fa3ee0?, 0xc0000cf910?}, 0xc000900b00, 0xc002181f58) test/e2e/framework/network/utils.go:1103 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.22() test/e2e/autoscaling/cluster_size_autoscaling.go:694 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004676000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 30 12:39:24.535: INFO: Condition Ready of node ca-minion-group-1-wcgp is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669810998 PreferNoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-11-30 12:30:24 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 12:30:29 +0000 UTC}]. 
I1130 12:39:24.535384 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 8, not ready nodes 1
Nov 30 12:39:44.589: INFO: Condition Ready of node ca-minion-group-1-wcgp is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669810998 PreferNoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-11-30 12:30:24 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 12:30:29 +0000 UTC}].
I1130 12:39:44.590207 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 8, not ready nodes 1
Nov 30 12:40:04.645: INFO: Condition Ready of node ca-minion-group-1-wcgp is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669810998 PreferNoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-11-30 12:30:24 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 12:30:29 +0000 UTC}].
I1130 12:40:04.645486 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 8, not ready nodes 1
Nov 30 12:40:24.697: INFO: Condition Ready of node ca-minion-group-1-wcgp is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669810998 PreferNoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-11-30 12:30:24 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 12:30:29 +0000 UTC}].
I1130 12:40:24.697257 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 8, not ready nodes 1
------------------------------
Automatically polling progress:
  [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed and one node is broken [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 11m7.598s)
    test/e2e/autoscaling/cluster_size_autoscaling.go:691
  In [It] (Node Runtime: 11m0.027s)
    test/e2e/autoscaling/cluster_size_autoscaling.go:691
  At [By Step] Some node should be removed (Step Runtime: 8m41.423s)
    test/e2e/autoscaling/cluster_size_autoscaling.go:683

  Spec Goroutine
  goroutine 8248 [select]
    k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).RoundTrip(0xc00091ea80, 0xc0004b0800)
      vendor/golang.org/x/net/http2/transport.go:1200
    k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc002178600, 0xc0004b0800, {0xe0?})
      vendor/golang.org/x/net/http2/transport.go:519
    k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTrip(...)
      vendor/golang.org/x/net/http2/transport.go:480
    k8s.io/kubernetes/vendor/golang.org/x/net/http2.noDialH2RoundTripper.RoundTrip({0xc003c0e000?}, 0xc0004b0800?)
      vendor/golang.org/x/net/http2/transport.go:3020
    net/http.(*Transport).roundTrip(0xc003c0e000, 0xc0004b0800)
      /usr/local/go/src/net/http/transport.go:540
    net/http.(*Transport).RoundTrip(0x6fe4b20?, 0xc003b6b410?)
      /usr/local/go/src/net/http/roundtrip.go:17
    k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*bearerAuthRoundTripper).RoundTrip(0xc0035b7500, 0xc0000ef400)
      vendor/k8s.io/client-go/transport/round_trippers.go:317
    k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc00114acc0, 0xc0000eea00)
      vendor/k8s.io/client-go/transport/round_trippers.go:168
    net/http.send(0xc0000eea00, {0x7fad100, 0xc00114acc0}, {0x74d54e0?, 0x1?, 0x0?})
      /usr/local/go/src/net/http/client.go:251
    net/http.(*Client).send(0xc0035b7560, 0xc0000eea00, {0x7fd477bfe108?, 0x100?, 0x0?})
      /usr/local/go/src/net/http/client.go:175
    net/http.(*Client).do(0xc0035b7560, 0xc0000eea00)
      /usr/local/go/src/net/http/client.go:715
    net/http.(*Client).Do(...)
      /usr/local/go/src/net/http/client.go:581
    k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).request(0xc0000ee200, {0x7fe0bc8, 0xc0001ae008}, 0x0?)
      vendor/k8s.io/client-go/rest/request.go:964
    k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).Do(0xc0000ee200, {0x7fe0bc8, 0xc0001ae008})
      vendor/k8s.io/client-go/rest/request.go:1005
    k8s.io/kubernetes/vendor/k8s.io/client-go/kubernetes/typed/core/v1.(*nodes).List(0xc004f24460, {0x7fe0bc8, 0xc0001ae008}, {{{0x0, 0x0}, {0x0, 0x0}}, {0x0, 0x0}, {0xc00348c3a8, ...}, ...})
      vendor/k8s.io/client-go/kubernetes/typed/core/v1/node.go:93
  > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc003f41520}, 0xc001e63c28, 0x1176592e000, 0x1)
      test/e2e/autoscaling/cluster_size_autoscaling.go:1365
  > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.20(0x1)
      test/e2e/autoscaling/cluster_size_autoscaling.go:684
  > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.22.1()
      test/e2e/autoscaling/cluster_size_autoscaling.go:694
    k8s.io/kubernetes/test/e2e/framework/network.TestUnderTemporaryNetworkFailure({0x801de88, 0xc003f41520}, {0x7fa3ee0?, 0xc0000cf910?}, 0xc000900b00, 0xc002181f58)
      test/e2e/framework/network/utils.go:1103
  > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.22()
      test/e2e/autoscaling/cluster_size_autoscaling.go:694
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004676000})
      vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
Nov 30 12:40:44.750: INFO: Condition Ready of node ca-minion-group-1-wcgp is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669810998 PreferNoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-11-30 12:30:24 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 12:30:29 +0000 UTC}].
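Note the shape of the repeated INFO lines: a node whose Ready condition is false is not treated as a hard failure when the NodeController or the cluster autoscaler has already tainted it (node.kubernetes.io/unreachable, DeletionCandidateOfClusterAutoscaler, ToBeDeletedByClusterAutoscaler), since that is the expected state for a node being drained or deleted during a scale-down. A hedged sketch of such a taint check, with a hypothetical helper name and the taint keys taken verbatim from the log:

```go
package autoscalingsketch

import v1 "k8s.io/api/core/v1"

// Taint keys that explain a NotReady condition during a scale-down test;
// copied from the log lines above. The set itself is an assumption, not
// the framework's exact list.
var toleratedTaints = map[string]bool{
	"node.kubernetes.io/unreachable":       true,
	"DeletionCandidateOfClusterAutoscaler": true,
	"ToBeDeletedByClusterAutoscaler":       true,
}

// notReadyExplainedByTaint reports whether a node's NotReady state is
// accounted for by a node-controller or cluster-autoscaler taint, so the
// test can log it as "tainted by NodeController" instead of failing.
func notReadyExplainedByTaint(node *v1.Node) bool {
	for _, t := range node.Spec.Taints {
		if toleratedTaints[t.Key] {
			return true
		}
	}
	return false
}
```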
I1130 12:40:44.750068 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 8, not ready nodes 1
Nov 30 12:41:04.805: INFO: Condition Ready of node ca-minion-group-1-wcgp is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669810998 PreferNoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-11-30 12:30:24 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 12:30:29 +0000 UTC}].
I1130 12:41:04.805149 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 8, not ready nodes 1
Nov 30 12:41:24.856: INFO: Condition Ready of node ca-minion-group-1-wcgp is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669810998 PreferNoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-11-30 12:30:24 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 12:30:29 +0000 UTC}].
I1130 12:41:24.856895 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 8, not ready nodes 1
Nov 30 12:41:44.913: INFO: Condition Ready of node ca-minion-group-1-wcgp is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669810998 PreferNoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-11-30 12:30:24 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 12:30:29 +0000 UTC}].
I1130 12:41:44.913658 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 8, not ready nodes 1
Nov 30 12:42:04.972: INFO: Condition Ready of node ca-minion-group-1-wcgp is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669810998 PreferNoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-11-30 12:30:24 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 12:30:29 +0000 UTC}].
I1130 12:42:04.972422 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 8, not ready nodes 1
Nov 30 12:42:25.024: INFO: Condition Ready of node ca-minion-group-1-wcgp is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669810998 PreferNoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-11-30 12:30:24 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 12:30:29 +0000 UTC}].
I1130 12:42:25.024048 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 8, not ready nodes 1
Nov 30 12:42:45.074: INFO: Condition Ready of node ca-minion-group-1-5jwg is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Nov 30 12:42:45.074: INFO: Condition Ready of node ca-minion-group-1-vv3b is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Nov 30 12:42:45.074: INFO: Condition Ready of node ca-minion-group-1-wcgp is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669810998 PreferNoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-11-30 12:30:24 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 12:30:29 +0000 UTC}].
Nov 30 12:42:45.074: INFO: Condition Ready of node ca-minion-group-zfch is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669811513 PreferNoSchedule <nil>} {ToBeDeletedByClusterAutoscaler 1669812118 NoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-11-30 12:42:34 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 12:42:39 +0000 UTC}].
I1130 12:42:45.074927 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 8, not ready nodes 4
Nov 30 12:43:05.124: INFO: Condition Ready of node ca-minion-group-1-5jwg is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Nov 30 12:43:05.124: INFO: Condition Ready of node ca-minion-group-1-hp4k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Nov 30 12:43:05.124: INFO: Condition Ready of node ca-minion-group-1-vv3b is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Nov 30 12:43:05.124: INFO: Condition Ready of node ca-minion-group-1-wcgp is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669810998 PreferNoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-11-30 12:30:24 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 12:30:29 +0000 UTC}].
Nov 30 12:43:05.124: INFO: Condition Ready of node ca-minion-group-8j9g is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Nov 30 12:43:05.124: INFO: Condition Ready of node ca-minion-group-g8g3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Nov 30 12:43:05.124: INFO: Condition Ready of node ca-minion-group-zfch is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669811513 PreferNoSchedule <nil>} {ToBeDeletedByClusterAutoscaler 1669812118 NoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-11-30 12:42:34 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 12:42:39 +0000 UTC}].
I1130 12:43:05.124540 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 8, not ready nodes 7
Nov 30 12:43:25.175: INFO: Condition Ready of node ca-minion-group-1-5jwg is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Nov 30 12:43:25.175: INFO: Condition Ready of node ca-minion-group-1-hp4k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Nov 30 12:43:25.175: INFO: Condition Ready of node ca-minion-group-1-vv3b is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Nov 30 12:43:25.175: INFO: Condition Ready of node ca-minion-group-1-wcgp is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669810998 PreferNoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-11-30 12:30:24 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 12:30:29 +0000 UTC} {ToBeDeletedByClusterAutoscaler 1669812198 NoSchedule <nil>}].
Nov 30 12:43:25.175: INFO: Condition Ready of node ca-minion-group-8j9g is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Nov 30 12:43:25.175: INFO: Condition Ready of node ca-minion-group-g8g3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Nov 30 12:43:25.175: INFO: Condition Ready of node ca-minion-group-zfch is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669811513 PreferNoSchedule <nil>} {ToBeDeletedByClusterAutoscaler 1669812118 NoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-11-30 12:42:34 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 12:42:39 +0000 UTC}].
I1130 12:43:25.175445 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 8, not ready nodes 7
Nov 30 12:43:45.225: INFO: Condition Ready of node ca-minion-group-1-5jwg is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Nov 30 12:43:45.225: INFO: Condition Ready of node ca-minion-group-1-hp4k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Nov 30 12:43:45.225: INFO: Condition Ready of node ca-minion-group-1-vv3b is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Nov 30 12:43:45.225: INFO: Condition Ready of node ca-minion-group-1-wcgp is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669810998 PreferNoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-11-30 12:30:24 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 12:30:29 +0000 UTC} {ToBeDeletedByClusterAutoscaler 1669812198 NoSchedule <nil>}].
Nov 30 12:43:45.225: INFO: Condition Ready of node ca-minion-group-8j9g is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Nov 30 12:43:45.225: INFO: Condition Ready of node ca-minion-group-g8g3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Nov 30 12:43:45.225: INFO: Condition Ready of node ca-minion-group-zfch is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669811513 PreferNoSchedule <nil>} {ToBeDeletedByClusterAutoscaler 1669812118 NoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-11-30 12:42:34 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 12:42:39 +0000 UTC}].
I1130 12:43:45.225973 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 8, not ready nodes 7
Nov 30 12:44:05.274: INFO: Condition Ready of node ca-minion-group-1-5jwg is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Nov 30 12:44:05.274: INFO: Condition Ready of node ca-minion-group-1-hp4k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Nov 30 12:44:05.274: INFO: Condition Ready of node ca-minion-group-1-vv3b is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Nov 30 12:44:05.274: INFO: Condition Ready of node ca-minion-group-1-wcgp is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669810998 PreferNoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-11-30 12:30:24 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 12:30:29 +0000 UTC} {ToBeDeletedByClusterAutoscaler 1669812198 NoSchedule <nil>}].
Nov 30 12:44:05.274: INFO: Condition Ready of node ca-minion-group-8j9g is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Nov 30 12:44:05.274: INFO: Condition Ready of node ca-minion-group-g8g3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Nov 30 12:44:05.274: INFO: Condition Ready of node ca-minion-group-zfch is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669811513 PreferNoSchedule <nil>} {ToBeDeletedByClusterAutoscaler 1669812118 NoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-11-30 12:42:34 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 12:42:39 +0000 UTC}].
I1130 12:44:05.274438 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 8, not ready nodes 7
Nov 30 12:44:25.320: INFO: Condition Ready of node ca-minion-group-1-wcgp is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669810998 PreferNoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-11-30 12:30:24 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 12:30:29 +0000 UTC} {ToBeDeletedByClusterAutoscaler 1669812198 NoSchedule <nil>}].
Nov 30 12:44:25.320: INFO: Condition Ready of node ca-minion-group-8j9g is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
I1130 12:44:25.320413 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 3, not ready nodes 2
Nov 30 12:44:45.365: INFO: Condition Ready of node ca-minion-group-1-wcgp is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669810998 PreferNoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-11-30 12:30:24 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 12:30:29 +0000 UTC} {ToBeDeletedByClusterAutoscaler 1669812198 NoSchedule <nil>}].
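Every goroutine dump in this run bottoms out in framework/network.TestUnderTemporaryNetworkFailure, which is why an "Unblock network traffic" step follows below even though the wait has just succeeded: the helper cuts the node off from the control plane, runs the test body while it is unreachable, and restores connectivity as deferred cleanup. A sketch of that control flow only (blockFn/unblockFn are stand-ins for the framework's iptables-over-SSH helpers, not its API):

```go
package autoscalingsketch

// testUnderTemporaryNetworkFailure sketches the wrapper's control flow:
// block traffic, run the body while the node is unreachable, and always
// unblock on the way out, even if the body fails or panics.
func testUnderTemporaryNetworkFailure(blockFn, unblockFn func(), body func()) {
	blockFn()
	defer unblockFn() // the "Unblock network traffic ..." step in the log
	body()
}
```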
I1130 12:44:45.365947 8016 cluster_size_autoscaling.go:1381] Cluster has reached the desired size
STEP: Delete PodDisruptionBudget test-pdb-for-kube-dns 11/30/22 12:44:45.365
STEP: Delete PodDisruptionBudget test-pdb-for-kube-dns-autoscaler 11/30/22 12:44:45.41
STEP: Delete PodDisruptionBudget test-pdb-for-metrics-server 11/30/22 12:44:45.454
STEP: Delete PodDisruptionBudget test-pdb-for-kubernetes-dashboard 11/30/22 12:44:45.497
STEP: Delete PodDisruptionBudget test-pdb-for-glbc 11/30/22 12:44:45.542
STEP: Unblock network traffic from node ca-minion-group-1-wcgp to the control plane 11/30/22 12:44:45.585
Nov 30 12:44:45.586: INFO: Unblock network traffic from 34.82.84.140:22 to 35.230.76.149
------------------------------
Automatically polling progress:
  [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed and one node is broken [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 15m27.626s)
    test/e2e/autoscaling/cluster_size_autoscaling.go:691
  In [It] (Node Runtime: 15m20.055s)
    test/e2e/autoscaling/cluster_size_autoscaling.go:691
  At [By Step] Unblock network traffic from node ca-minion-group-1-wcgp to the control plane (Step Runtime: 19.168s)
    test/e2e/framework/network/utils.go:1084

  Spec Goroutine
  goroutine 8248 [IO wait]
    internal/poll.runtime_pollWait(0x7fd44ff8d248, 0x77)
      /usr/local/go/src/runtime/netpoll.go:305
    internal/poll.(*pollDesc).wait(0xc0000f1b80?, 0x75b686b?, 0x0)
      /usr/local/go/src/internal/poll/fd_poll_runtime.go:84
    internal/poll.(*pollDesc).waitWrite(...)
      /usr/local/go/src/internal/poll/fd_poll_runtime.go:93
    internal/poll.(*FD).WaitWrite(...)
      /usr/local/go/src/internal/poll/fd_unix.go:741
    net.(*netFD).connect(0xc0000f1b80, {0x7fe0bc8?, 0xc0001ae000}, {0xc001e62e10?, 0x262a61f?}, {0x7fae180?, 0xc00439e280?})
      /usr/local/go/src/net/fd_unix.go:141
    net.(*netFD).dial(0xc0000f1b80, {0x7fe0bc8, 0xc0001ae000}, {0x7fea0e8?, 0x0?}, {0x7fea0e8?, 0xc004f264e0}, 0x2634e33?)
      /usr/local/go/src/net/sock_posix.go:149
    net.socket({0x7fe0bc8, 0xc0001ae000}, {0x75b686b, 0x3}, 0x2, 0x1, 0x17?, 0x1a?, {0x7fea0e8, 0x0}, ...)
      /usr/local/go/src/net/sock_posix.go:70
    net.internetSocket({0x7fe0bc8, 0xc0001ae000}, {0x75b686b, 0x3}, {0x7fea0e8, 0x0}, {0x7fea0e8, 0xc004f264e0}, 0xc?, 0x0, ...)
      /usr/local/go/src/net/ipsock_posix.go:142
    net.(*sysDialer).doDialTCP(0xc003c1eab0, {0x7fe0bc8, 0xc0001ae000}, 0x0, 0x1?)
      /usr/local/go/src/net/tcpsock_posix.go:68
    net.(*sysDialer).dialTCP(0xc00008cf00?, {0x7fe0bc8?, 0xc0001ae000?}, 0x203001?, 0xc004f264e0?)
      /usr/local/go/src/net/tcpsock_posix.go:64
    net.(*sysDialer).dialSingle(0xc003c1eab0, {0x7fe0bc8, 0xc0001ae000}, {0x7fbddb0?, 0xc004f264e0})
      /usr/local/go/src/net/dial.go:582
    net.(*sysDialer).dialSerial(0xc003c1eab0, {0x7fe0bc8, 0xc0001ae000}, {0xc000f50490?, 0x1, 0x294fab5?})
      /usr/local/go/src/net/dial.go:550
    net.(*sysDialer).dialParallel(0xc000f50480?, {0x7fe0bc8?, 0xc0001ae000?}, {0xc000f50490?, 0xc0001ae000?, 0x75b74a2?}, {0x0?, 0x75b686b?, 0xc00008c780?})
      /usr/local/go/src/net/dial.go:451
    net.(*Dialer).DialContext(0xc001e63618, {0x7fe0bc8, 0xc0001ae000}, {0x75b686b, 0x3}, {0xc003acea80, 0xf})
      /usr/local/go/src/net/dial.go:428
    net.(*Dialer).Dial(...)
      /usr/local/go/src/net/dial.go:355
    net.DialTimeout({0x75b686b?, 0x262a967?}, {0xc003acea80?, 0x6b98c60?}, 0x1?)
      /usr/local/go/src/net/dial.go:337
    k8s.io/kubernetes/vendor/golang.org/x/crypto/ssh.Dial({0x75b686b?, 0x3cfb82f?}, {0xc003acea80, 0xf}, 0xc0011ce4e0)
      vendor/golang.org/x/crypto/ssh/client.go:177
    k8s.io/kubernetes/test/e2e/framework/ssh.runSSHCommand({0xc0034ce140, 0x47}, {0xc000082075?, 0x2718bc7?}, {0xc003acea80, 0xf}, {0x7fb61f0?, 0xc003b002a0})
      test/e2e/framework/ssh/ssh.go:242
    k8s.io/kubernetes/test/e2e/framework/ssh.SSH({0xc0034ce140, 0x47}, {0xc003acea80, 0xf}, {0x7ffde962d9e8, 0x3})
      test/e2e/framework/ssh/ssh.go:222
  > k8s.io/kubernetes/test/e2e/framework/network.UnblockNetwork.func1()
      test/e2e/framework/network/utils.go:1147
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0})
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0001ae000?}, 0x262a61f?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001ae000}, 0xc004938ed0, 0x2fdb16a?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001ae000}, 0x48?, 0x2fd9d05?, 0x20?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0001ae000}, 0xc001e63d00?, 0xc001e63c98?, 0x262a967?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x762542e?, 0x19?, 0xc001e63d00?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445
  > k8s.io/kubernetes/test/e2e/framework/network.UnblockNetwork({0xc003acea80, 0xf}, {0xc003aceb80, 0xd})
      test/e2e/framework/network/utils.go:1146
  > k8s.io/kubernetes/test/e2e/framework/network.TestUnderTemporaryNetworkFailure.func1()
      test/e2e/framework/network/utils.go:1086
  > k8s.io/kubernetes/test/e2e/framework/network.TestUnderTemporaryNetworkFailure({0x801de88, 0xc003f41520}, {0x7fa3ee0?, 0xc0000cf910?}, 0xc000900b00, 0xc002181f58)
      test/e2e/framework/network/utils.go:1105
  > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.22()
      test/e2e/autoscaling/cluster_size_autoscaling.go:694
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004676000})
      vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
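The [IO wait] goroutine shows the unblock step stuck in net.DialTimeout toward 34.82.84.140:22: the framework is retrying an SSH command against the node inside a wait.Poll loop, and the dial itself hangs while the node is still unreachable. A minimal sketch of such a step, assuming golang.org/x/crypto/ssh; the iptables rule shape, command, and credentials are assumptions, not the framework's actual helper:

```go
package autoscalingsketch

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// unblockControlPlane dials the node's SSH port (the call the goroutine
// above is blocked in) and deletes the DROP rule that was cutting off
// traffic to the control plane.
func unblockControlPlane(nodeAddr, controlPlaneIP, user string, signer ssh.Signer) error {
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test nodes only
	}
	client, err := ssh.Dial("tcp", nodeAddr, cfg) // hangs while the node is unreachable
	if err != nil {
		return fmt.Errorf("dial %s: %w", nodeAddr, err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdout, sess.Stderr = os.Stdout, os.Stderr

	// The real helper wraps this in a poll with retries; shown here as a
	// single attempt. The exact rule being removed is an assumption.
	return sess.Run(fmt.Sprintf("sudo iptables -D OUTPUT --destination %s --jump DROP", controlPlaneIP))
}
```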
/usr/local/go/src/net/tcpsock_posix.go:64 net.(*sysDialer).dialSingle(0xc003c1eab0, {0x7fe0bc8, 0xc0001ae000}, {0x7fbddb0?, 0xc004f264e0}) /usr/local/go/src/net/dial.go:582 net.(*sysDialer).dialSerial(0xc003c1eab0, {0x7fe0bc8, 0xc0001ae000}, {0xc000f50490?, 0x1, 0x294fab5?}) /usr/local/go/src/net/dial.go:550 net.(*sysDialer).dialParallel(0xc000f50480?, {0x7fe0bc8?, 0xc0001ae000?}, {0xc000f50490?, 0xc0001ae000?, 0x75b74a2?}, {0x0?, 0x75b686b?, 0xc00008c780?}) /usr/local/go/src/net/dial.go:451 net.(*Dialer).DialContext(0xc001e63618, {0x7fe0bc8, 0xc0001ae000}, {0x75b686b, 0x3}, {0xc003acea80, 0xf}) /usr/local/go/src/net/dial.go:428 net.(*Dialer).Dial(...) /usr/local/go/src/net/dial.go:355 net.DialTimeout({0x75b686b?, 0x262a967?}, {0xc003acea80?, 0x6b98c60?}, 0x1?) /usr/local/go/src/net/dial.go:337 k8s.io/kubernetes/vendor/golang.org/x/crypto/ssh.Dial({0x75b686b?, 0x3cfb82f?}, {0xc003acea80, 0xf}, 0xc0011ce4e0) vendor/golang.org/x/crypto/ssh/client.go:177 k8s.io/kubernetes/test/e2e/framework/ssh.runSSHCommand({0xc0034ce140, 0x47}, {0xc000082075?, 0x2718bc7?}, {0xc003acea80, 0xf}, {0x7fb61f0?, 0xc003b002a0}) test/e2e/framework/ssh/ssh.go:242 k8s.io/kubernetes/test/e2e/framework/ssh.SSH({0xc0034ce140, 0x47}, {0xc003acea80, 0xf}, {0x7ffde962d9e8, 0x3}) test/e2e/framework/ssh/ssh.go:222 > k8s.io/kubernetes/test/e2e/framework/network.UnblockNetwork.func1() test/e2e/framework/network/utils.go:1147 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0001ae000?}, 0x262a61f?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001ae000}, 0xc004938ed0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001ae000}, 0x48?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0001ae000}, 0xc001e63d00?, 0xc001e63c98?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x762542e?, 0x19?, 0xc001e63d00?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/framework/network.UnblockNetwork({0xc003acea80, 0xf}, {0xc003aceb80, 0xd}) test/e2e/framework/network/utils.go:1146 > k8s.io/kubernetes/test/e2e/framework/network.TestUnderTemporaryNetworkFailure.func1() test/e2e/framework/network/utils.go:1086 > k8s.io/kubernetes/test/e2e/framework/network.TestUnderTemporaryNetworkFailure({0x801de88, 0xc003f41520}, {0x7fa3ee0?, 0xc0000cf910?}, 0xc000900b00, 0xc002181f58) test/e2e/framework/network/utils.go:1105 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.22() test/e2e/autoscaling/cluster_size_autoscaling.go:694 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004676000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed and one node is broken [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 16m47.637s) test/e2e/autoscaling/cluster_size_autoscaling.go:691 In [It] (Node Runtime: 16m40.065s) test/e2e/autoscaling/cluster_size_autoscaling.go:691 At [By Step] Unblock network traffic from node ca-minion-group-1-wcgp to the control plane (Step Runtime: 1m39.179s) test/e2e/framework/network/utils.go:1084 Spec Goroutine goroutine 8248 [IO wait, 2 minutes] internal/poll.runtime_pollWait(0x7fd44ff8d248, 0x77) /usr/local/go/src/runtime/netpoll.go:305 internal/poll.(*pollDesc).wait(0xc0000f1b80?, 0x75b686b?, 0x0) /usr/local/go/src/internal/poll/fd_poll_runtime.go:84 internal/poll.(*pollDesc).waitWrite(...) /usr/local/go/src/internal/poll/fd_poll_runtime.go:93 internal/poll.(*FD).WaitWrite(...) /usr/local/go/src/internal/poll/fd_unix.go:741 net.(*netFD).connect(0xc0000f1b80, {0x7fe0bc8?, 0xc0001ae000}, {0xc001e62e10?, 0x262a61f?}, {0x7fae180?, 0xc00439e280?}) /usr/local/go/src/net/fd_unix.go:141 net.(*netFD).dial(0xc0000f1b80, {0x7fe0bc8, 0xc0001ae000}, {0x7fea0e8?, 0x0?}, {0x7fea0e8?, 0xc004f264e0}, 0x2634e33?) /usr/local/go/src/net/sock_posix.go:149 net.socket({0x7fe0bc8, 0xc0001ae000}, {0x75b686b, 0x3}, 0x2, 0x1, 0x17?, 0x1a?, {0x7fea0e8, 0x0}, ...) /usr/local/go/src/net/sock_posix.go:70 net.internetSocket({0x7fe0bc8, 0xc0001ae000}, {0x75b686b, 0x3}, {0x7fea0e8, 0x0}, {0x7fea0e8, 0xc004f264e0}, 0xc?, 0x0, ...) /usr/local/go/src/net/ipsock_posix.go:142 net.(*sysDialer).doDialTCP(0xc003c1eab0, {0x7fe0bc8, 0xc0001ae000}, 0x0, 0x1?) /usr/local/go/src/net/tcpsock_posix.go:68 net.(*sysDialer).dialTCP(0xc00008cf00?, {0x7fe0bc8?, 0xc0001ae000?}, 0x203001?, 0xc004f264e0?) 
/usr/local/go/src/net/tcpsock_posix.go:64 net.(*sysDialer).dialSingle(0xc003c1eab0, {0x7fe0bc8, 0xc0001ae000}, {0x7fbddb0?, 0xc004f264e0}) /usr/local/go/src/net/dial.go:582 net.(*sysDialer).dialSerial(0xc003c1eab0, {0x7fe0bc8, 0xc0001ae000}, {0xc000f50490?, 0x1, 0x294fab5?}) /usr/local/go/src/net/dial.go:550 net.(*sysDialer).dialParallel(0xc000f50480?, {0x7fe0bc8?, 0xc0001ae000?}, {0xc000f50490?, 0xc0001ae000?, 0x75b74a2?}, {0x0?, 0x75b686b?, 0xc00008c780?}) /usr/local/go/src/net/dial.go:451 net.(*Dialer).DialContext(0xc001e63618, {0x7fe0bc8, 0xc0001ae000}, {0x75b686b, 0x3}, {0xc003acea80, 0xf}) /usr/local/go/src/net/dial.go:428 net.(*Dialer).Dial(...) /usr/local/go/src/net/dial.go:355 net.DialTimeout({0x75b686b?, 0x262a967?}, {0xc003acea80?, 0x6b98c60?}, 0x1?) /usr/local/go/src/net/dial.go:337 k8s.io/kubernetes/vendor/golang.org/x/crypto/ssh.Dial({0x75b686b?, 0x3cfb82f?}, {0xc003acea80, 0xf}, 0xc0011ce4e0) vendor/golang.org/x/crypto/ssh/client.go:177 k8s.io/kubernetes/test/e2e/framework/ssh.runSSHCommand({0xc0034ce140, 0x47}, {0xc000082075?, 0x2718bc7?}, {0xc003acea80, 0xf}, {0x7fb61f0?, 0xc003b002a0}) test/e2e/framework/ssh/ssh.go:242 k8s.io/kubernetes/test/e2e/framework/ssh.SSH({0xc0034ce140, 0x47}, {0xc003acea80, 0xf}, {0x7ffde962d9e8, 0x3}) test/e2e/framework/ssh/ssh.go:222 > k8s.io/kubernetes/test/e2e/framework/network.UnblockNetwork.func1() test/e2e/framework/network/utils.go:1147 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0001ae000?}, 0x262a61f?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001ae000}, 0xc004938ed0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001ae000}, 0x48?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0001ae000}, 0xc001e63d00?, 0xc001e63c98?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x762542e?, 0x19?, 0xc001e63d00?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/framework/network.UnblockNetwork({0xc003acea80, 0xf}, {0xc003aceb80, 0xd}) test/e2e/framework/network/utils.go:1146 > k8s.io/kubernetes/test/e2e/framework/network.TestUnderTemporaryNetworkFailure.func1() test/e2e/framework/network/utils.go:1086 > k8s.io/kubernetes/test/e2e/framework/network.TestUnderTemporaryNetworkFailure({0x801de88, 0xc003f41520}, {0x7fa3ee0?, 0xc0000cf910?}, 0xc000900b00, 0xc002181f58) test/e2e/framework/network/utils.go:1105 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.22() test/e2e/autoscaling/cluster_size_autoscaling.go:694 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004676000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed and one node is broken [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 17m7.64s) test/e2e/autoscaling/cluster_size_autoscaling.go:691 In [It] (Node Runtime: 17m0.069s) test/e2e/autoscaling/cluster_size_autoscaling.go:691 At [By Step] Unblock network traffic from node ca-minion-group-1-wcgp to the control plane (Step Runtime: 1m59.182s) test/e2e/framework/network/utils.go:1084 Spec Goroutine goroutine 8248 [IO wait, 2 minutes] internal/poll.runtime_pollWait(0x7fd44ff8d248, 0x77) /usr/local/go/src/runtime/netpoll.go:305 internal/poll.(*pollDesc).wait(0xc0000f1b80?, 0x75b686b?, 0x0) /usr/local/go/src/internal/poll/fd_poll_runtime.go:84 internal/poll.(*pollDesc).waitWrite(...) /usr/local/go/src/internal/poll/fd_poll_runtime.go:93 internal/poll.(*FD).WaitWrite(...) /usr/local/go/src/internal/poll/fd_unix.go:741 net.(*netFD).connect(0xc0000f1b80, {0x7fe0bc8?, 0xc0001ae000}, {0xc001e62e10?, 0x262a61f?}, {0x7fae180?, 0xc00439e280?}) /usr/local/go/src/net/fd_unix.go:141 net.(*netFD).dial(0xc0000f1b80, {0x7fe0bc8, 0xc0001ae000}, {0x7fea0e8?, 0x0?}, {0x7fea0e8?, 0xc004f264e0}, 0x2634e33?) /usr/local/go/src/net/sock_posix.go:149 net.socket({0x7fe0bc8, 0xc0001ae000}, {0x75b686b, 0x3}, 0x2, 0x1, 0x17?, 0x1a?, {0x7fea0e8, 0x0}, ...) /usr/local/go/src/net/sock_posix.go:70 net.internetSocket({0x7fe0bc8, 0xc0001ae000}, {0x75b686b, 0x3}, {0x7fea0e8, 0x0}, {0x7fea0e8, 0xc004f264e0}, 0xc?, 0x0, ...) /usr/local/go/src/net/ipsock_posix.go:142 net.(*sysDialer).doDialTCP(0xc003c1eab0, {0x7fe0bc8, 0xc0001ae000}, 0x0, 0x1?) /usr/local/go/src/net/tcpsock_posix.go:68 net.(*sysDialer).dialTCP(0xc00008cf00?, {0x7fe0bc8?, 0xc0001ae000?}, 0x203001?, 0xc004f264e0?) 
/usr/local/go/src/net/tcpsock_posix.go:64 net.(*sysDialer).dialSingle(0xc003c1eab0, {0x7fe0bc8, 0xc0001ae000}, {0x7fbddb0?, 0xc004f264e0}) /usr/local/go/src/net/dial.go:582 net.(*sysDialer).dialSerial(0xc003c1eab0, {0x7fe0bc8, 0xc0001ae000}, {0xc000f50490?, 0x1, 0x294fab5?}) /usr/local/go/src/net/dial.go:550 net.(*sysDialer).dialParallel(0xc000f50480?, {0x7fe0bc8?, 0xc0001ae000?}, {0xc000f50490?, 0xc0001ae000?, 0x75b74a2?}, {0x0?, 0x75b686b?, 0xc00008c780?}) /usr/local/go/src/net/dial.go:451 net.(*Dialer).DialContext(0xc001e63618, {0x7fe0bc8, 0xc0001ae000}, {0x75b686b, 0x3}, {0xc003acea80, 0xf}) /usr/local/go/src/net/dial.go:428 net.(*Dialer).Dial(...) /usr/local/go/src/net/dial.go:355 net.DialTimeout({0x75b686b?, 0x262a967?}, {0xc003acea80?, 0x6b98c60?}, 0x1?) /usr/local/go/src/net/dial.go:337 k8s.io/kubernetes/vendor/golang.org/x/crypto/ssh.Dial({0x75b686b?, 0x3cfb82f?}, {0xc003acea80, 0xf}, 0xc0011ce4e0) vendor/golang.org/x/crypto/ssh/client.go:177 k8s.io/kubernetes/test/e2e/framework/ssh.runSSHCommand({0xc0034ce140, 0x47}, {0xc000082075?, 0x2718bc7?}, {0xc003acea80, 0xf}, {0x7fb61f0?, 0xc003b002a0}) test/e2e/framework/ssh/ssh.go:242 k8s.io/kubernetes/test/e2e/framework/ssh.SSH({0xc0034ce140, 0x47}, {0xc003acea80, 0xf}, {0x7ffde962d9e8, 0x3}) test/e2e/framework/ssh/ssh.go:222 > k8s.io/kubernetes/test/e2e/framework/network.UnblockNetwork.func1() test/e2e/framework/network/utils.go:1147 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0001ae000?}, 0x262a61f?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001ae000}, 0xc004938ed0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001ae000}, 0x48?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0001ae000}, 0xc001e63d00?, 0xc001e63c98?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x762542e?, 0x19?, 0xc001e63d00?) 
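What the dump shows: for the whole "Unblock network traffic" step, goroutine 8248 has been parked in the TCP connect of ssh.Dial to ca-minion-group-1-wcgp, called from the framework's UnblockNetwork (utils.go:1146), which retries an SSH command inside a wait.Poll loop (utils.go:1147). Below is a minimal Go sketch of that pattern; helper names, intervals and the exact iptables rule are illustrative, not the framework's own (the real helpers live in test/e2e/framework/ssh and test/e2e/framework/network):

package netfailuresketch

import (
	"fmt"
	"time"

	"golang.org/x/crypto/ssh"
	"k8s.io/apimachinery/pkg/util/wait"
)

// runSSH dials host:22 and runs one command in a single session. The
// ssh.Dial call here corresponds to the frame the goroutine above is
// blocked in.
func runSSH(host, cmd string, cfg *ssh.ClientConfig) error {
	client, err := ssh.Dial("tcp", host+":22", cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	return sess.Run(cmd)
}

// unblockNetwork keeps retrying the rule removal until it succeeds or the
// poll times out, mirroring the wait.Poll wrapper at utils.go:1146-1147.
// The iptables rule is an assumption for illustration.
func unblockNetwork(host, controlPlaneIP string, cfg *ssh.ClientConfig) error {
	cmd := fmt.Sprintf("sudo iptables -D OUTPUT -d %s -j DROP", controlPlaneIP)
	return wait.Poll(20*time.Second, 10*time.Minute, func() (bool, error) {
		if err := runSSH(host, cmd, cfg); err != nil {
			return false, nil // node unreachable; keep polling
		}
		return true, nil
	})
}

If sshd on the node never becomes reachable again, the condition never returns true and the loop spins until its timeout, which is exactly what the repeated snapshots record.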
------------------------------
------------------------------
Automatically polling progress:
  [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed and one node is broken [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 17m27.643s)
    test/e2e/autoscaling/cluster_size_autoscaling.go:691
    In [It] (Node Runtime: 17m20.072s)
      test/e2e/autoscaling/cluster_size_autoscaling.go:691
      At [By Step] Unblock network traffic from node ca-minion-group-1-wcgp to the control plane (Step Runtime: 2m19.185s)
        test/e2e/framework/network/utils.go:1084

  Spec Goroutine
  goroutine 8248 [IO wait]
    internal/poll.runtime_pollWait(0x7fd44ff8d248, 0x77)
      /usr/local/go/src/runtime/netpoll.go:305
    internal/poll.(*pollDesc).wait(0xc0000f0c00?, 0x75b686b?, 0x0)
      /usr/local/go/src/internal/poll/fd_poll_runtime.go:84
    internal/poll.(*pollDesc).waitWrite(...)
      /usr/local/go/src/internal/poll/fd_poll_runtime.go:93
    internal/poll.(*FD).WaitWrite(...)
      /usr/local/go/src/internal/poll/fd_unix.go:741
    net.(*netFD).connect(0xc0000f0c00, {0x7fe0bc8?, 0xc0001ae000}, {0xc001e62bb8?, 0x262a77d?}, {0x7fae180?, 0xc00439e020?})
      /usr/local/go/src/net/fd_unix.go:141
    net.(*netFD).dial(0xc0000f0c00, {0x7fe0bc8, 0xc0001ae000}, {0x7fea0e8?, 0x0?}, {0x7fea0e8?, 0xc004f26060}, 0x0?)
      /usr/local/go/src/net/sock_posix.go:149
    net.socket({0x7fe0bc8, 0xc0001ae000}, {0x75b686b, 0x3}, 0x2, 0x1, 0x1e62d08?, 0x68?, {0x7fea0e8, 0x0}, ...)
      /usr/local/go/src/net/sock_posix.go:70
    net.internetSocket({0x7fe0bc8, 0xc0001ae000}, {0x75b686b, 0x3}, {0x7fea0e8, 0x0}, {0x7fea0e8, 0xc004f26060}, 0xc?, 0x0, ...)
      /usr/local/go/src/net/ipsock_posix.go:142
    net.(*sysDialer).doDialTCP(0xc003c3a000, {0x7fe0bc8, 0xc0001ae000}, 0x0, 0x1?)
      /usr/local/go/src/net/tcpsock_posix.go:68
    net.(*sysDialer).dialTCP(0x7fd477bfe5b8?, {0x7fe0bc8?, 0xc0001ae000?}, 0x203001?, 0xc004f26060?)
      /usr/local/go/src/net/tcpsock_posix.go:64
    net.(*sysDialer).dialSingle(0xc003c3a000, {0x7fe0bc8, 0xc0001ae000}, {0x7fbddb0?, 0xc004f26060})
      /usr/local/go/src/net/dial.go:582
    net.(*sysDialer).dialSerial(0xc003c3a000, {0x7fe0bc8, 0xc0001ae000}, {0xc000f500c0?, 0x1, 0x294fab5?})
      /usr/local/go/src/net/dial.go:550
    net.(*sysDialer).dialParallel(0xc000f500b0?, {0x7fe0bc8?, 0xc0001ae000?}, {0xc000f500c0?, 0xc0001ae000?, 0x75b74a2?}, {0x0?, 0x75b686b?, 0xc003acea80?})
      /usr/local/go/src/net/dial.go:451
    net.(*Dialer).DialContext(0xc001e633c0, {0x7fe0bc8, 0xc0001ae000}, {0x75b686b, 0x3}, {0xc003acea80, 0xf})
      /usr/local/go/src/net/dial.go:428
    net.(*Dialer).Dial(...)
      /usr/local/go/src/net/dial.go:355
    net.DialTimeout({0x75b686b?, 0x68?}, {0xc003acea80?, 0x0?}, 0x0?)
      /usr/local/go/src/net/dial.go:337
    k8s.io/kubernetes/vendor/golang.org/x/crypto/ssh.Dial({0x75b686b?, 0xc00019c008?}, {0xc003acea80, 0xf}, 0xc0011ce4e0)
      vendor/golang.org/x/crypto/ssh/client.go:177
    k8s.io/kubernetes/test/e2e/framework/ssh.runSSHCommand.func1()
      test/e2e/framework/ssh/ssh.go:246
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0})
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0001ae000?}, 0x262a61f?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001ae000}, 0xc004938018, 0x2fdb16a?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001ae000}, 0x60?, 0x2fd9d05?, 0x40?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0001ae000}, 0x3cea8db?, 0xc001e636b0?, 0x262a967?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b686b?, 0x3cfb82f?, 0xc003acea80?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445
    k8s.io/kubernetes/test/e2e/framework/ssh.runSSHCommand({0xc0034ce140, 0x47}, {0xc000082075?, 0x2718bc7?}, {0xc003acea80, 0xf}, {0x7fb61f0?, 0xc003b002a0})
      test/e2e/framework/ssh/ssh.go:244
    k8s.io/kubernetes/test/e2e/framework/ssh.SSH({0xc0034ce140, 0x47}, {0xc003acea80, 0xf}, {0x7ffde962d9e8, 0x3})
      test/e2e/framework/ssh/ssh.go:222
  > k8s.io/kubernetes/test/e2e/framework/network.UnblockNetwork.func1()
      test/e2e/framework/network/utils.go:1147
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0})
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0001ae000?}, 0x262a61f?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001ae000}, 0xc004938ed0, 0x2fdb16a?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001ae000}, 0x48?, 0x2fd9d05?, 0x20?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0001ae000}, 0xc001e63d00?, 0xc001e63c98?, 0x262a967?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x762542e?, 0x19?, 0xc001e63d00?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445
  > k8s.io/kubernetes/test/e2e/framework/network.UnblockNetwork({0xc003acea80, 0xf}, {0xc003aceb80, 0xd})
      test/e2e/framework/network/utils.go:1146
  > k8s.io/kubernetes/test/e2e/framework/network.TestUnderTemporaryNetworkFailure.func1()
      test/e2e/framework/network/utils.go:1086
  > k8s.io/kubernetes/test/e2e/framework/network.TestUnderTemporaryNetworkFailure({0x801de88, 0xc003f41520}, {0x7fa3ee0?, 0xc0000cf910?}, 0xc000900b00, 0xc002181f58)
      test/e2e/framework/network/utils.go:1105
  > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.22()
      test/e2e/autoscaling/cluster_size_autoscaling.go:694
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004676000})
      vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
------------------------------
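From this poll on the stack is one level deeper: the dial now runs under ssh.runSSHCommand.func1 (ssh.go:246) inside a second wait.Poll (ssh.go:244), so the SSH helper retries the dial itself, nested inside UnblockNetwork's outer poll. Note also that ssh.Dial hands ClientConfig.Timeout straight to net.DialTimeout (client.go:177 in the vendored copy above); with a zero Timeout, each connect attempt can block for the kernel's full SYN-retransmit window, which is consistent with a goroutine sitting in [IO wait] for minutes. A hedged sketch of bounding the dial, in the same sketch package as above; field values are illustrative:

package netfailuresketch

import (
	"time"

	"golang.org/x/crypto/ssh"
)

// clientConfig builds an ssh.ClientConfig whose Timeout bounds the
// net.DialTimeout performed inside ssh.Dial, so the surrounding poll
// loops, not the kernel, decide how long each attempt may take.
func clientConfig(user string, auth []ssh.AuthMethod) *ssh.ClientConfig {
	return &ssh.ClientConfig{
		User:            user,
		Auth:            auth,
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // e2e-style; not for production
		Timeout:         30 * time.Second,            // assumption: a bound chosen for illustration
	}
}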
(The next five progress polls, at Spec Runtime 17m47.645s, 18m7.649s, 18m27.652s, 18m47.653s and 19m7.657s, with Step Runtime advancing from 2m39.187s to 3m59.199s, report the identical goroutine 8248 stack, its state aging from [IO wait] to [IO wait, 2 minutes]; the dial to ca-minion-group-1-wcgp never completes.)
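The utils.go frames also give away the overall shape of TestUnderTemporaryNetworkFailure: block traffic from the node to the control plane, run the test body, then unblock on the way out (utils.go:1086). Because the unblock runs in the cleanup path, a node that never accepts SSH again stalls the entire spec right here. An illustrative sketch of that shape, reusing runSSH and unblockNetwork from the first sketch; the signature and iptables rule are hypothetical, not the framework's actual API:

// testUnderTemporaryNetworkFailure sketches the block/run/unblock shape
// implied by the stack; it belongs to the same netfailuresketch package
// and reuses runSSH and unblockNetwork from above (fmt is already imported there).
func testUnderTemporaryNetworkFailure(host, controlPlaneIP string, cfg *ssh.ClientConfig, body func()) error {
	block := fmt.Sprintf("sudo iptables -I OUTPUT 1 -d %s -j DROP", controlPlaneIP)
	if err := runSSH(host, block, cfg); err != nil {
		return err
	}
	defer func() {
		// utils.go:1086: this deferred unblock is where goroutine 8248 is parked.
		_ = unblockNetwork(host, controlPlaneIP, cfg)
	}()
	body()
	return nil
}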
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x762542e?, 0x19?, 0xc001e63d00?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445
> k8s.io/kubernetes/test/e2e/framework/network.UnblockNetwork({0xc003acea80, 0xf}, {0xc003aceb80, 0xd})
    test/e2e/framework/network/utils.go:1146
> k8s.io/kubernetes/test/e2e/framework/network.TestUnderTemporaryNetworkFailure.func1()
    test/e2e/framework/network/utils.go:1086
> k8s.io/kubernetes/test/e2e/framework/network.TestUnderTemporaryNetworkFailure({0x801de88, 0xc003f41520}, {0x7fa3ee0?, 0xc0000cf910?}, 0xc000900b00, 0xc002181f58)
    test/e2e/framework/network/utils.go:1105
> k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.22()
    test/e2e/autoscaling/cluster_size_autoscaling.go:694
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004676000})
    vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
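Every one of the progress dumps in this window (Spec Runtime 19m7.657s through 21m27.679s, polled every 20s) shows the identical blocked goroutine 8248: UnblockNetwork re-issues its iptables command through wait.Poll, and each attempt parks inside ssh.Dial because the node's SSH endpoint is unreachable. A minimal, self-contained Go sketch of that retry shape follows; this is not the framework's exact code, and the key path, intervals, and timeouts are placeholders (the host, user, and command are the ones visible in the log):

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
	"k8s.io/apimachinery/pkg/util/wait"
)

// runSSH dials the host and runs one command. While the node is cut off,
// the ssh.Dial here is the frame the goroutine above is parked in.
func runSSH(addr, cmd string, cfg *ssh.ClientConfig) error {
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	return session.Run(cmd)
}

func main() {
	key, err := os.ReadFile("/workspace/.ssh/id_rsa") // placeholder key path
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "prow",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable in a test sketch only
		Timeout:         20 * time.Second,
	}
	// Same shape as the framework's loop: retry the unblock command,
	// swallowing per-attempt errors, until the overall timeout expires --
	// which is what surfaces as "timed out waiting for the condition".
	err = wait.Poll(20*time.Second, 5*time.Minute, func() (bool, error) {
		cmd := "sudo iptables --delete OUTPUT --destination 35.230.76.149 --jump REJECT"
		if e := runSSH("34.82.84.140:22", cmd, cfg); e != nil {
			return false, nil // node not reachable yet; poll again
		}
		return true, nil
	})
	if err != nil {
		fmt.Println("unblock failed:", err)
	}
}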
------------------------------
Nov 30 12:51:22.161: INFO: ssh prow@34.82.84.140:22: command: sudo iptables --delete OUTPUT --destination 35.230.76.149 --jump REJECT
Nov 30 12:51:22.161: INFO: ssh prow@34.82.84.140:22: stdout: ""
Nov 30 12:51:22.161: INFO: ssh prow@34.82.84.140:22: stderr: ""
Nov 30 12:51:22.161: INFO: ssh prow@34.82.84.140:22: exit code: 0
Nov 30 12:51:22.161: INFO: Unexpected error: error getting SSH client to prow@34.82.84.140:22: 'timed out waiting for the condition'
------------------------------
Automatically polling progress:
  [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed and one node is broken [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 21m47.681s)
    test/e2e/autoscaling/cluster_size_autoscaling.go:691
  In [It] (Node Runtime: 21m40.109s)
    test/e2e/autoscaling/cluster_size_autoscaling.go:691
  At [By Step] Unblock network traffic from node ca-minion-group-1-wcgp to the control plane (Step Runtime: 6m39.223s)
    test/e2e/framework/network/utils.go:1084

Spec Goroutine
goroutine 8248 [IO wait]
  internal/poll.runtime_pollWait(0x7fd44ff8d248, 0x77)
    /usr/local/go/src/runtime/netpoll.go:305
  internal/poll.(*pollDesc).wait(0xc0000f0c80?, 0x75b686b?, 0x0)
    /usr/local/go/src/internal/poll/fd_poll_runtime.go:84
  internal/poll.(*pollDesc).waitWrite(...)
    /usr/local/go/src/internal/poll/fd_poll_runtime.go:93
  internal/poll.(*FD).WaitWrite(...)
    /usr/local/go/src/internal/poll/fd_unix.go:741
  net.(*netFD).connect(0xc0000f0c80, {0x7fe0bc8?, 0xc0001ae000}, {0xc001e62e10?, 0x262a61f?}, {0x7fae180?, 0xc00439e000?})
    /usr/local/go/src/net/fd_unix.go:141
  net.(*netFD).dial(0xc0000f0c80, {0x7fe0bc8, 0xc0001ae000}, {0x7fea0e8?, 0x0?}, {0x7fea0e8?, 0xc004f261b0}, 0x2634e33?)
    /usr/local/go/src/net/sock_posix.go:149
  net.socket({0x7fe0bc8, 0xc0001ae000}, {0x75b686b, 0x3}, 0x2, 0x1, 0x17?, 0x1a?, {0x7fea0e8, 0x0}, ...)
    /usr/local/go/src/net/sock_posix.go:70
  net.internetSocket({0x7fe0bc8, 0xc0001ae000}, {0x75b686b, 0x3}, {0x7fea0e8, 0x0}, {0x7fea0e8, 0xc004f261b0}, 0xc?, 0x0, ...)
    /usr/local/go/src/net/ipsock_posix.go:142
  net.(*sysDialer).doDialTCP(0xc003c3a000, {0x7fe0bc8, 0xc0001ae000}, 0x0, 0x4?)
    /usr/local/go/src/net/tcpsock_posix.go:68
  net.(*sysDialer).dialTCP(0xc00008d0e0?, {0x7fe0bc8?, 0xc0001ae000?}, 0x203001?, 0xc004f261b0?)
    /usr/local/go/src/net/tcpsock_posix.go:64
  net.(*sysDialer).dialSingle(0xc003c3a000, {0x7fe0bc8, 0xc0001ae000}, {0x7fbddb0?, 0xc004f261b0})
    /usr/local/go/src/net/dial.go:582
  net.(*sysDialer).dialSerial(0xc003c3a000, {0x7fe0bc8, 0xc0001ae000}, {0xc000f10240?, 0x1, 0x294fab5?})
    /usr/local/go/src/net/dial.go:550
  net.(*sysDialer).dialParallel(0xc000f10230?, {0x7fe0bc8?, 0xc0001ae000?}, {0xc000f10240?, 0xc0001ae000?, 0x75b74a2?}, {0x0?, 0x75b686b?, 0xc00008c960?})
    /usr/local/go/src/net/dial.go:451
  net.(*Dialer).DialContext(0xc001e63618, {0x7fe0bc8, 0xc0001ae000}, {0x75b686b, 0x3}, {0xc003acea80, 0xf})
    /usr/local/go/src/net/dial.go:428
  net.(*Dialer).Dial(...)
    /usr/local/go/src/net/dial.go:355
  net.DialTimeout({0x75b686b?, 0x262a967?}, {0xc003acea80?, 0x6b98c60?}, 0x1?)
    /usr/local/go/src/net/dial.go:337
  k8s.io/kubernetes/vendor/golang.org/x/crypto/ssh.Dial({0x75b686b?, 0x3cfb82f?}, {0xc003acea80, 0xf}, 0xc0009540d0)
    vendor/golang.org/x/crypto/ssh/client.go:177
  k8s.io/kubernetes/test/e2e/framework/ssh.runSSHCommand({0xc0034ce140, 0x47}, {0xc000082075?, 0x3a0bb5d?}, {0xc003acea80, 0xf}, {0x7fb61f0?, 0xc000a00180})
    test/e2e/framework/ssh/ssh.go:242
  k8s.io/kubernetes/test/e2e/framework/ssh.SSH({0xc0034ce140, 0x47}, {0xc003acea80, 0xf}, {0x7ffde962d9e8, 0x3})
    test/e2e/framework/ssh/ssh.go:222
> k8s.io/kubernetes/test/e2e/framework/network.UnblockNetwork.func1()
    test/e2e/framework/network/utils.go:1147
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0})
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0001ae000?}, 0x262a61f?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001ae000}, 0xc004938ed0, 0x2fdb16a?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001ae000}, 0x48?, 0x2fd9d05?, 0x20?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0001ae000}, 0xc001e63d00?, 0xc001e63c98?, 0x262a967?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x762542e?, 0x19?, 0xc001e63d00?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445
> k8s.io/kubernetes/test/e2e/framework/network.UnblockNetwork({0xc003acea80, 0xf}, {0xc003aceb80, 0xd})
    test/e2e/framework/network/utils.go:1146
> k8s.io/kubernetes/test/e2e/framework/network.TestUnderTemporaryNetworkFailure.func1()
    test/e2e/framework/network/utils.go:1086
> k8s.io/kubernetes/test/e2e/framework/network.TestUnderTemporaryNetworkFailure({0x801de88, 0xc003f41520}, {0x7fa3ee0?, 0xc0000cf910?}, 0xc000900b00, 0xc002181f58)
    test/e2e/framework/network/utils.go:1105
> k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.22()
    test/e2e/autoscaling/cluster_size_autoscaling.go:694
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004676000})
    vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/framework/network.UnblockNetwork({0xc003acea80, 0xf}, {0xc003aceb80, 0xd}) test/e2e/framework/network/utils.go:1146 > k8s.io/kubernetes/test/e2e/framework/network.TestUnderTemporaryNetworkFailure.func1() test/e2e/framework/network/utils.go:1086 > k8s.io/kubernetes/test/e2e/framework/network.TestUnderTemporaryNetworkFailure({0x801de88, 0xc003f41520}, {0x7fa3ee0?, 0xc0000cf910?}, 0xc000900b00, 0xc002181f58) test/e2e/framework/network/utils.go:1105 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.22() test/e2e/autoscaling/cluster_size_autoscaling.go:694 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004676000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed and one node is broken [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 22m47.687s) test/e2e/autoscaling/cluster_size_autoscaling.go:691 In [It] (Node Runtime: 22m40.115s) test/e2e/autoscaling/cluster_size_autoscaling.go:691 At [By Step] Unblock network traffic from node ca-minion-group-1-wcgp to the control plane (Step Runtime: 7m39.229s) test/e2e/framework/network/utils.go:1084 Spec Goroutine goroutine 8248 [IO wait, 2 minutes] internal/poll.runtime_pollWait(0x7fd44ff8d248, 0x77) /usr/local/go/src/runtime/netpoll.go:305 internal/poll.(*pollDesc).wait(0xc0000f0c80?, 0x75b686b?, 0x0) /usr/local/go/src/internal/poll/fd_poll_runtime.go:84 internal/poll.(*pollDesc).waitWrite(...) /usr/local/go/src/internal/poll/fd_poll_runtime.go:93 internal/poll.(*FD).WaitWrite(...) /usr/local/go/src/internal/poll/fd_unix.go:741 net.(*netFD).connect(0xc0000f0c80, {0x7fe0bc8?, 0xc0001ae000}, {0xc001e62e10?, 0x262a61f?}, {0x7fae180?, 0xc00439e000?}) /usr/local/go/src/net/fd_unix.go:141 net.(*netFD).dial(0xc0000f0c80, {0x7fe0bc8, 0xc0001ae000}, {0x7fea0e8?, 0x0?}, {0x7fea0e8?, 0xc004f261b0}, 0x2634e33?) /usr/local/go/src/net/sock_posix.go:149 net.socket({0x7fe0bc8, 0xc0001ae000}, {0x75b686b, 0x3}, 0x2, 0x1, 0x17?, 0x1a?, {0x7fea0e8, 0x0}, ...) /usr/local/go/src/net/sock_posix.go:70 net.internetSocket({0x7fe0bc8, 0xc0001ae000}, {0x75b686b, 0x3}, {0x7fea0e8, 0x0}, {0x7fea0e8, 0xc004f261b0}, 0xc?, 0x0, ...) /usr/local/go/src/net/ipsock_posix.go:142 net.(*sysDialer).doDialTCP(0xc003c3a000, {0x7fe0bc8, 0xc0001ae000}, 0x0, 0x4?) /usr/local/go/src/net/tcpsock_posix.go:68 net.(*sysDialer).dialTCP(0xc00008d0e0?, {0x7fe0bc8?, 0xc0001ae000?}, 0x203001?, 0xc004f261b0?) 
/usr/local/go/src/net/tcpsock_posix.go:64 net.(*sysDialer).dialSingle(0xc003c3a000, {0x7fe0bc8, 0xc0001ae000}, {0x7fbddb0?, 0xc004f261b0}) /usr/local/go/src/net/dial.go:582 net.(*sysDialer).dialSerial(0xc003c3a000, {0x7fe0bc8, 0xc0001ae000}, {0xc000f10240?, 0x1, 0x294fab5?}) /usr/local/go/src/net/dial.go:550 net.(*sysDialer).dialParallel(0xc000f10230?, {0x7fe0bc8?, 0xc0001ae000?}, {0xc000f10240?, 0xc0001ae000?, 0x75b74a2?}, {0x0?, 0x75b686b?, 0xc00008c960?}) /usr/local/go/src/net/dial.go:451 net.(*Dialer).DialContext(0xc001e63618, {0x7fe0bc8, 0xc0001ae000}, {0x75b686b, 0x3}, {0xc003acea80, 0xf}) /usr/local/go/src/net/dial.go:428 net.(*Dialer).Dial(...) /usr/local/go/src/net/dial.go:355 net.DialTimeout({0x75b686b?, 0x262a967?}, {0xc003acea80?, 0x6b98c60?}, 0x1?) /usr/local/go/src/net/dial.go:337 k8s.io/kubernetes/vendor/golang.org/x/crypto/ssh.Dial({0x75b686b?, 0x3cfb82f?}, {0xc003acea80, 0xf}, 0xc0009540d0) vendor/golang.org/x/crypto/ssh/client.go:177 k8s.io/kubernetes/test/e2e/framework/ssh.runSSHCommand({0xc0034ce140, 0x47}, {0xc000082075?, 0x3a0bb5d?}, {0xc003acea80, 0xf}, {0x7fb61f0?, 0xc000a00180}) test/e2e/framework/ssh/ssh.go:242 k8s.io/kubernetes/test/e2e/framework/ssh.SSH({0xc0034ce140, 0x47}, {0xc003acea80, 0xf}, {0x7ffde962d9e8, 0x3}) test/e2e/framework/ssh/ssh.go:222 > k8s.io/kubernetes/test/e2e/framework/network.UnblockNetwork.func1() test/e2e/framework/network/utils.go:1147 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0001ae000?}, 0x262a61f?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001ae000}, 0xc004938ed0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001ae000}, 0x48?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0001ae000}, 0xc001e63d00?, 0xc001e63c98?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x762542e?, 0x19?, 0xc001e63d00?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/framework/network.UnblockNetwork({0xc003acea80, 0xf}, {0xc003aceb80, 0xd}) test/e2e/framework/network/utils.go:1146 > k8s.io/kubernetes/test/e2e/framework/network.TestUnderTemporaryNetworkFailure.func1() test/e2e/framework/network/utils.go:1086 > k8s.io/kubernetes/test/e2e/framework/network.TestUnderTemporaryNetworkFailure({0x801de88, 0xc003f41520}, {0x7fa3ee0?, 0xc0000cf910?}, 0xc000900b00, 0xc002181f58) test/e2e/framework/network/utils.go:1105 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.22() test/e2e/autoscaling/cluster_size_autoscaling.go:694 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004676000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed and one node is broken [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 23m7.691s) test/e2e/autoscaling/cluster_size_autoscaling.go:691 In [It] (Node Runtime: 23m0.119s) test/e2e/autoscaling/cluster_size_autoscaling.go:691 At [By Step] Unblock network traffic from node ca-minion-group-1-wcgp to the control plane (Step Runtime: 7m59.233s) test/e2e/framework/network/utils.go:1084 Spec Goroutine goroutine 8248 [IO wait, 2 minutes] internal/poll.runtime_pollWait(0x7fd44ff8d248, 0x77) /usr/local/go/src/runtime/netpoll.go:305 internal/poll.(*pollDesc).wait(0xc0000f0c80?, 0x75b686b?, 0x0) /usr/local/go/src/internal/poll/fd_poll_runtime.go:84 internal/poll.(*pollDesc).waitWrite(...) /usr/local/go/src/internal/poll/fd_poll_runtime.go:93 internal/poll.(*FD).WaitWrite(...) /usr/local/go/src/internal/poll/fd_unix.go:741 net.(*netFD).connect(0xc0000f0c80, {0x7fe0bc8?, 0xc0001ae000}, {0xc001e62e10?, 0x262a61f?}, {0x7fae180?, 0xc00439e000?}) /usr/local/go/src/net/fd_unix.go:141 net.(*netFD).dial(0xc0000f0c80, {0x7fe0bc8, 0xc0001ae000}, {0x7fea0e8?, 0x0?}, {0x7fea0e8?, 0xc004f261b0}, 0x2634e33?) /usr/local/go/src/net/sock_posix.go:149 net.socket({0x7fe0bc8, 0xc0001ae000}, {0x75b686b, 0x3}, 0x2, 0x1, 0x17?, 0x1a?, {0x7fea0e8, 0x0}, ...) /usr/local/go/src/net/sock_posix.go:70 net.internetSocket({0x7fe0bc8, 0xc0001ae000}, {0x75b686b, 0x3}, {0x7fea0e8, 0x0}, {0x7fea0e8, 0xc004f261b0}, 0xc?, 0x0, ...) /usr/local/go/src/net/ipsock_posix.go:142 net.(*sysDialer).doDialTCP(0xc003c3a000, {0x7fe0bc8, 0xc0001ae000}, 0x0, 0x4?) /usr/local/go/src/net/tcpsock_posix.go:68 net.(*sysDialer).dialTCP(0xc00008d0e0?, {0x7fe0bc8?, 0xc0001ae000?}, 0x203001?, 0xc004f261b0?) 
/usr/local/go/src/net/tcpsock_posix.go:64 net.(*sysDialer).dialSingle(0xc003c3a000, {0x7fe0bc8, 0xc0001ae000}, {0x7fbddb0?, 0xc004f261b0}) /usr/local/go/src/net/dial.go:582 net.(*sysDialer).dialSerial(0xc003c3a000, {0x7fe0bc8, 0xc0001ae000}, {0xc000f10240?, 0x1, 0x294fab5?}) /usr/local/go/src/net/dial.go:550 net.(*sysDialer).dialParallel(0xc000f10230?, {0x7fe0bc8?, 0xc0001ae000?}, {0xc000f10240?, 0xc0001ae000?, 0x75b74a2?}, {0x0?, 0x75b686b?, 0xc00008c960?}) /usr/local/go/src/net/dial.go:451 net.(*Dialer).DialContext(0xc001e63618, {0x7fe0bc8, 0xc0001ae000}, {0x75b686b, 0x3}, {0xc003acea80, 0xf}) /usr/local/go/src/net/dial.go:428 net.(*Dialer).Dial(...) /usr/local/go/src/net/dial.go:355 net.DialTimeout({0x75b686b?, 0x262a967?}, {0xc003acea80?, 0x6b98c60?}, 0x1?) /usr/local/go/src/net/dial.go:337 k8s.io/kubernetes/vendor/golang.org/x/crypto/ssh.Dial({0x75b686b?, 0x3cfb82f?}, {0xc003acea80, 0xf}, 0xc0009540d0) vendor/golang.org/x/crypto/ssh/client.go:177 k8s.io/kubernetes/test/e2e/framework/ssh.runSSHCommand({0xc0034ce140, 0x47}, {0xc000082075?, 0x3a0bb5d?}, {0xc003acea80, 0xf}, {0x7fb61f0?, 0xc000a00180}) test/e2e/framework/ssh/ssh.go:242 k8s.io/kubernetes/test/e2e/framework/ssh.SSH({0xc0034ce140, 0x47}, {0xc003acea80, 0xf}, {0x7ffde962d9e8, 0x3}) test/e2e/framework/ssh/ssh.go:222 > k8s.io/kubernetes/test/e2e/framework/network.UnblockNetwork.func1() test/e2e/framework/network/utils.go:1147 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0001ae000?}, 0x262a61f?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001ae000}, 0xc004938ed0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001ae000}, 0x48?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0001ae000}, 0xc001e63d00?, 0xc001e63c98?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x762542e?, 0x19?, 0xc001e63d00?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/framework/network.UnblockNetwork({0xc003acea80, 0xf}, {0xc003aceb80, 0xd}) test/e2e/framework/network/utils.go:1146 > k8s.io/kubernetes/test/e2e/framework/network.TestUnderTemporaryNetworkFailure.func1() test/e2e/framework/network/utils.go:1086 > k8s.io/kubernetes/test/e2e/framework/network.TestUnderTemporaryNetworkFailure({0x801de88, 0xc003f41520}, {0x7fa3ee0?, 0xc0000cf910?}, 0xc000900b00, 0xc002181f58) test/e2e/framework/network/utils.go:1105 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.22() test/e2e/autoscaling/cluster_size_autoscaling.go:694 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004676000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed and one node is broken [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 23m27.692s) test/e2e/autoscaling/cluster_size_autoscaling.go:691 In [It] (Node Runtime: 23m20.121s) test/e2e/autoscaling/cluster_size_autoscaling.go:691 At [By Step] Unblock network traffic from node ca-minion-group-1-wcgp to the control plane (Step Runtime: 8m19.234s) test/e2e/framework/network/utils.go:1084 Spec Goroutine goroutine 8248 [IO wait, 3 minutes] internal/poll.runtime_pollWait(0x7fd44ff8d248, 0x77) /usr/local/go/src/runtime/netpoll.go:305 internal/poll.(*pollDesc).wait(0xc0000f0c80?, 0x75b686b?, 0x0) /usr/local/go/src/internal/poll/fd_poll_runtime.go:84 internal/poll.(*pollDesc).waitWrite(...) /usr/local/go/src/internal/poll/fd_poll_runtime.go:93 internal/poll.(*FD).WaitWrite(...) /usr/local/go/src/internal/poll/fd_unix.go:741 net.(*netFD).connect(0xc0000f0c80, {0x7fe0bc8?, 0xc0001ae000}, {0xc001e62e10?, 0x262a61f?}, {0x7fae180?, 0xc00439e000?}) /usr/local/go/src/net/fd_unix.go:141 net.(*netFD).dial(0xc0000f0c80, {0x7fe0bc8, 0xc0001ae000}, {0x7fea0e8?, 0x0?}, {0x7fea0e8?, 0xc004f261b0}, 0x2634e33?) /usr/local/go/src/net/sock_posix.go:149 net.socket({0x7fe0bc8, 0xc0001ae000}, {0x75b686b, 0x3}, 0x2, 0x1, 0x17?, 0x1a?, {0x7fea0e8, 0x0}, ...) /usr/local/go/src/net/sock_posix.go:70 net.internetSocket({0x7fe0bc8, 0xc0001ae000}, {0x75b686b, 0x3}, {0x7fea0e8, 0x0}, {0x7fea0e8, 0xc004f261b0}, 0xc?, 0x0, ...) /usr/local/go/src/net/ipsock_posix.go:142 net.(*sysDialer).doDialTCP(0xc003c3a000, {0x7fe0bc8, 0xc0001ae000}, 0x0, 0x4?) /usr/local/go/src/net/tcpsock_posix.go:68 net.(*sysDialer).dialTCP(0xc00008d0e0?, {0x7fe0bc8?, 0xc0001ae000?}, 0x203001?, 0xc004f261b0?) 
/usr/local/go/src/net/tcpsock_posix.go:64 net.(*sysDialer).dialSingle(0xc003c3a000, {0x7fe0bc8, 0xc0001ae000}, {0x7fbddb0?, 0xc004f261b0}) /usr/local/go/src/net/dial.go:582 net.(*sysDialer).dialSerial(0xc003c3a000, {0x7fe0bc8, 0xc0001ae000}, {0xc000f10240?, 0x1, 0x294fab5?}) /usr/local/go/src/net/dial.go:550 net.(*sysDialer).dialParallel(0xc000f10230?, {0x7fe0bc8?, 0xc0001ae000?}, {0xc000f10240?, 0xc0001ae000?, 0x75b74a2?}, {0x0?, 0x75b686b?, 0xc00008c960?}) /usr/local/go/src/net/dial.go:451 net.(*Dialer).DialContext(0xc001e63618, {0x7fe0bc8, 0xc0001ae000}, {0x75b686b, 0x3}, {0xc003acea80, 0xf}) /usr/local/go/src/net/dial.go:428 net.(*Dialer).Dial(...) /usr/local/go/src/net/dial.go:355 net.DialTimeout({0x75b686b?, 0x262a967?}, {0xc003acea80?, 0x6b98c60?}, 0x1?) /usr/local/go/src/net/dial.go:337 k8s.io/kubernetes/vendor/golang.org/x/crypto/ssh.Dial({0x75b686b?, 0x3cfb82f?}, {0xc003acea80, 0xf}, 0xc0009540d0) vendor/golang.org/x/crypto/ssh/client.go:177 k8s.io/kubernetes/test/e2e/framework/ssh.runSSHCommand({0xc0034ce140, 0x47}, {0xc000082075?, 0x3a0bb5d?}, {0xc003acea80, 0xf}, {0x7fb61f0?, 0xc000a00180}) test/e2e/framework/ssh/ssh.go:242 k8s.io/kubernetes/test/e2e/framework/ssh.SSH({0xc0034ce140, 0x47}, {0xc003acea80, 0xf}, {0x7ffde962d9e8, 0x3}) test/e2e/framework/ssh/ssh.go:222 > k8s.io/kubernetes/test/e2e/framework/network.UnblockNetwork.func1() test/e2e/framework/network/utils.go:1147 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0001ae000?}, 0x262a61f?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001ae000}, 0xc004938ed0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001ae000}, 0x48?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0001ae000}, 0xc001e63d00?, 0xc001e63c98?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x762542e?, 0x19?, 0xc001e63d00?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/framework/network.UnblockNetwork({0xc003acea80, 0xf}, {0xc003aceb80, 0xd}) test/e2e/framework/network/utils.go:1146 > k8s.io/kubernetes/test/e2e/framework/network.TestUnderTemporaryNetworkFailure.func1() test/e2e/framework/network/utils.go:1086 > k8s.io/kubernetes/test/e2e/framework/network.TestUnderTemporaryNetworkFailure({0x801de88, 0xc003f41520}, {0x7fa3ee0?, 0xc0000cf910?}, 0xc000900b00, 0xc002181f58) test/e2e/framework/network/utils.go:1105 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.22() test/e2e/autoscaling/cluster_size_autoscaling.go:694 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004676000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed and one node is broken [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 23m47.696s) test/e2e/autoscaling/cluster_size_autoscaling.go:691 In [It] (Node Runtime: 23m40.124s) test/e2e/autoscaling/cluster_size_autoscaling.go:691 At [By Step] Unblock network traffic from node ca-minion-group-1-wcgp to the control plane (Step Runtime: 8m39.238s) test/e2e/framework/network/utils.go:1084 Spec Goroutine goroutine 8248 [IO wait, 3 minutes] internal/poll.runtime_pollWait(0x7fd44ff8d248, 0x77) /usr/local/go/src/runtime/netpoll.go:305 internal/poll.(*pollDesc).wait(0xc0000f0c80?, 0x75b686b?, 0x0) /usr/local/go/src/internal/poll/fd_poll_runtime.go:84 internal/poll.(*pollDesc).waitWrite(...) /usr/local/go/src/internal/poll/fd_poll_runtime.go:93 internal/poll.(*FD).WaitWrite(...) /usr/local/go/src/internal/poll/fd_unix.go:741 net.(*netFD).connect(0xc0000f0c80, {0x7fe0bc8?, 0xc0001ae000}, {0xc001e62e10?, 0x262a61f?}, {0x7fae180?, 0xc00439e000?}) /usr/local/go/src/net/fd_unix.go:141 net.(*netFD).dial(0xc0000f0c80, {0x7fe0bc8, 0xc0001ae000}, {0x7fea0e8?, 0x0?}, {0x7fea0e8?, 0xc004f261b0}, 0x2634e33?) /usr/local/go/src/net/sock_posix.go:149 net.socket({0x7fe0bc8, 0xc0001ae000}, {0x75b686b, 0x3}, 0x2, 0x1, 0x17?, 0x1a?, {0x7fea0e8, 0x0}, ...) /usr/local/go/src/net/sock_posix.go:70 net.internetSocket({0x7fe0bc8, 0xc0001ae000}, {0x75b686b, 0x3}, {0x7fea0e8, 0x0}, {0x7fea0e8, 0xc004f261b0}, 0xc?, 0x0, ...) /usr/local/go/src/net/ipsock_posix.go:142 net.(*sysDialer).doDialTCP(0xc003c3a000, {0x7fe0bc8, 0xc0001ae000}, 0x0, 0x4?) /usr/local/go/src/net/tcpsock_posix.go:68 net.(*sysDialer).dialTCP(0xc00008d0e0?, {0x7fe0bc8?, 0xc0001ae000?}, 0x203001?, 0xc004f261b0?) 
/usr/local/go/src/net/tcpsock_posix.go:64 net.(*sysDialer).dialSingle(0xc003c3a000, {0x7fe0bc8, 0xc0001ae000}, {0x7fbddb0?, 0xc004f261b0}) /usr/local/go/src/net/dial.go:582 net.(*sysDialer).dialSerial(0xc003c3a000, {0x7fe0bc8, 0xc0001ae000}, {0xc000f10240?, 0x1, 0x294fab5?}) /usr/local/go/src/net/dial.go:550 net.(*sysDialer).dialParallel(0xc000f10230?, {0x7fe0bc8?, 0xc0001ae000?}, {0xc000f10240?, 0xc0001ae000?, 0x75b74a2?}, {0x0?, 0x75b686b?, 0xc00008c960?}) /usr/local/go/src/net/dial.go:451 net.(*Dialer).DialContext(0xc001e63618, {0x7fe0bc8, 0xc0001ae000}, {0x75b686b, 0x3}, {0xc003acea80, 0xf}) /usr/local/go/src/net/dial.go:428 net.(*Dialer).Dial(...) /usr/local/go/src/net/dial.go:355 net.DialTimeout({0x75b686b?, 0x262a967?}, {0xc003acea80?, 0x6b98c60?}, 0x1?) /usr/local/go/src/net/dial.go:337 k8s.io/kubernetes/vendor/golang.org/x/crypto/ssh.Dial({0x75b686b?, 0x3cfb82f?}, {0xc003acea80, 0xf}, 0xc0009540d0) vendor/golang.org/x/crypto/ssh/client.go:177 k8s.io/kubernetes/test/e2e/framework/ssh.runSSHCommand({0xc0034ce140, 0x47}, {0xc000082075?, 0x3a0bb5d?}, {0xc003acea80, 0xf}, {0x7fb61f0?, 0xc000a00180}) test/e2e/framework/ssh/ssh.go:242 k8s.io/kubernetes/test/e2e/framework/ssh.SSH({0xc0034ce140, 0x47}, {0xc003acea80, 0xf}, {0x7ffde962d9e8, 0x3}) test/e2e/framework/ssh/ssh.go:222 > k8s.io/kubernetes/test/e2e/framework/network.UnblockNetwork.func1() test/e2e/framework/network/utils.go:1147 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0001ae000?}, 0x262a61f?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001ae000}, 0xc004938ed0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001ae000}, 0x48?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0001ae000}, 0xc001e63d00?, 0xc001e63c98?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x762542e?, 0x19?, 0xc001e63d00?) 
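Reading the stack: goroutine 8248 is parked in [IO wait] because ssh.Dial (vendor/golang.org/x/crypto/ssh/client.go:177) funnels into net.DialTimeout, and the TCP connect to ca-minion-group-1-wcgp's SSH endpoint never completes. UnblockNetwork (test/e2e/framework/network/utils.go:1146) wraps that SSH attempt in a wait.Poll loop, so the "Unblock network traffic" step keeps retrying rather than failing fast, and the Step Runtime grows past ten minutes. From the 24m7.697s poll onward the trace deepens slightly: runSSHCommand's own wait.Poll retry (test/e2e/framework/ssh/ssh.go:244 calling the condition at ssh.go:246) appears nested inside UnblockNetwork's outer poll.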
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/framework/network.UnblockNetwork({0xc003acea80, 0xf}, {0xc003aceb80, 0xd}) test/e2e/framework/network/utils.go:1146 > k8s.io/kubernetes/test/e2e/framework/network.TestUnderTemporaryNetworkFailure.func1() test/e2e/framework/network/utils.go:1086 > k8s.io/kubernetes/test/e2e/framework/network.TestUnderTemporaryNetworkFailure({0x801de88, 0xc003f41520}, {0x7fa3ee0?, 0xc0000cf910?}, 0xc000900b00, 0xc002181f58) test/e2e/framework/network/utils.go:1105 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.22() test/e2e/autoscaling/cluster_size_autoscaling.go:694 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004676000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed and one node is broken [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 24m7.697s) test/e2e/autoscaling/cluster_size_autoscaling.go:691 In [It] (Node Runtime: 24m0.126s) test/e2e/autoscaling/cluster_size_autoscaling.go:691 At [By Step] Unblock network traffic from node ca-minion-group-1-wcgp to the control plane (Step Runtime: 8m59.239s) test/e2e/framework/network/utils.go:1084 Spec Goroutine goroutine 8248 [IO wait] internal/poll.runtime_pollWait(0x7fd44ff8d248, 0x77) /usr/local/go/src/runtime/netpoll.go:305 internal/poll.(*pollDesc).wait(0xc0000f0d98?, 0x77?, 0x0) /usr/local/go/src/internal/poll/fd_poll_runtime.go:84 internal/poll.(*pollDesc).waitWrite(...) /usr/local/go/src/internal/poll/fd_poll_runtime.go:93 internal/poll.(*FD).WaitWrite(...) /usr/local/go/src/internal/poll/fd_unix.go:741 net.(*netFD).connect(0xc0000f0d80, {0x7fe0bc8?, 0xc0001ae000}, {0xc001e62bb8?, 0x262a61f?}, {0x7fae180?, 0xc00439f280?}) /usr/local/go/src/net/fd_unix.go:141 net.(*netFD).dial(0xc0000f0d80, {0x7fe0bc8, 0xc0001ae000}, {0x7fea0e8?, 0x0?}, {0x7fea0e8?, 0xc004f270b0}, 0x0?) /usr/local/go/src/net/sock_posix.go:149 net.socket({0x7fe0bc8, 0xc0001ae000}, {0x75b686b, 0x3}, 0x2, 0x1, 0xc001e62d08?, 0x68?, {0x7fea0e8, 0x0}, ...) /usr/local/go/src/net/sock_posix.go:70 net.internetSocket({0x7fe0bc8, 0xc0001ae000}, {0x75b686b, 0x3}, {0x7fea0e8, 0x0}, {0x7fea0e8, 0xc004f270b0}, 0xc?, 0x0, ...) /usr/local/go/src/net/ipsock_posix.go:142 net.(*sysDialer).doDialTCP(0xc003c3a090, {0x7fe0bc8, 0xc0001ae000}, 0x0, 0x4?) /usr/local/go/src/net/tcpsock_posix.go:68 net.(*sysDialer).dialTCP(0xc0000bc800?, {0x7fe0bc8?, 0xc0001ae000?}, 0x203001?, 0xc004f270b0?) 
/usr/local/go/src/net/tcpsock_posix.go:64 net.(*sysDialer).dialSingle(0xc003c3a090, {0x7fe0bc8, 0xc0001ae000}, {0x7fbddb0?, 0xc004f270b0}) /usr/local/go/src/net/dial.go:582 net.(*sysDialer).dialSerial(0xc003c3a090, {0x7fe0bc8, 0xc0001ae000}, {0xc000f50880?, 0x1, 0x294fab5?}) /usr/local/go/src/net/dial.go:550 net.(*sysDialer).dialParallel(0xc000f50870?, {0x7fe0bc8?, 0xc0001ae000?}, {0xc000f50880?, 0xc0001ae000?, 0x75b74a2?}, {0x0?, 0x75b686b?, 0xc003acea80?}) /usr/local/go/src/net/dial.go:451 net.(*Dialer).DialContext(0xc001e633c0, {0x7fe0bc8, 0xc0001ae000}, {0x75b686b, 0x3}, {0xc003acea80, 0xf}) /usr/local/go/src/net/dial.go:428 net.(*Dialer).Dial(...) /usr/local/go/src/net/dial.go:355 net.DialTimeout({0x75b686b?, 0x68?}, {0xc003acea80?, 0x0?}, 0x0?) /usr/local/go/src/net/dial.go:337 k8s.io/kubernetes/vendor/golang.org/x/crypto/ssh.Dial({0x75b686b?, 0xc00019c008?}, {0xc003acea80, 0xf}, 0xc0009540d0) vendor/golang.org/x/crypto/ssh/client.go:177 k8s.io/kubernetes/test/e2e/framework/ssh.runSSHCommand.func1() test/e2e/framework/ssh/ssh.go:246 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0001ae000?}, 0x262a61f?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001ae000}, 0xc0023cc1e0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001ae000}, 0x60?, 0x2fd9d05?, 0x40?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0001ae000}, 0x3cea8db?, 0xc001e636b0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b686b?, 0x3cfb82f?, 0xc003acea80?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 k8s.io/kubernetes/test/e2e/framework/ssh.runSSHCommand({0xc0034ce140, 0x47}, {0xc000082075?, 0x3a0bb5d?}, {0xc003acea80, 0xf}, {0x7fb61f0?, 0xc000a00180}) test/e2e/framework/ssh/ssh.go:244 k8s.io/kubernetes/test/e2e/framework/ssh.SSH({0xc0034ce140, 0x47}, {0xc003acea80, 0xf}, {0x7ffde962d9e8, 0x3}) test/e2e/framework/ssh/ssh.go:222 > k8s.io/kubernetes/test/e2e/framework/network.UnblockNetwork.func1() test/e2e/framework/network/utils.go:1147 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0001ae000?}, 0x262a61f?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001ae000}, 0xc004938ed0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001ae000}, 0x48?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0001ae000}, 0xc001e63d00?, 0xc001e63c98?, 0x262a967?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x762542e?, 0x19?, 0xc001e63d00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/framework/network.UnblockNetwork({0xc003acea80, 0xf}, {0xc003aceb80, 0xd}) test/e2e/framework/network/utils.go:1146 > k8s.io/kubernetes/test/e2e/framework/network.TestUnderTemporaryNetworkFailure.func1() test/e2e/framework/network/utils.go:1086 > k8s.io/kubernetes/test/e2e/framework/network.TestUnderTemporaryNetworkFailure({0x801de88, 0xc003f41520}, {0x7fa3ee0?, 0xc0000cf910?}, 0xc000900b00, 0xc002181f58) test/e2e/framework/network/utils.go:1105 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.22() test/e2e/autoscaling/cluster_size_autoscaling.go:694 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004676000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed and one node is broken [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 24m27.7s) test/e2e/autoscaling/cluster_size_autoscaling.go:691 In [It] (Node Runtime: 24m20.129s) test/e2e/autoscaling/cluster_size_autoscaling.go:691 At [By Step] Unblock network traffic from node ca-minion-group-1-wcgp to the control plane (Step Runtime: 9m19.242s) test/e2e/framework/network/utils.go:1084 Spec Goroutine goroutine 8248 [IO wait, 2 minutes] internal/poll.runtime_pollWait(0x7fd44ff8d248, 0x77) /usr/local/go/src/runtime/netpoll.go:305 internal/poll.(*pollDesc).wait(0xc0000f0d98?, 0x77?, 0x0) /usr/local/go/src/internal/poll/fd_poll_runtime.go:84 internal/poll.(*pollDesc).waitWrite(...) /usr/local/go/src/internal/poll/fd_poll_runtime.go:93 internal/poll.(*FD).WaitWrite(...) /usr/local/go/src/internal/poll/fd_unix.go:741 net.(*netFD).connect(0xc0000f0d80, {0x7fe0bc8?, 0xc0001ae000}, {0xc001e62bb8?, 0x262a61f?}, {0x7fae180?, 0xc00439f280?}) /usr/local/go/src/net/fd_unix.go:141 net.(*netFD).dial(0xc0000f0d80, {0x7fe0bc8, 0xc0001ae000}, {0x7fea0e8?, 0x0?}, {0x7fea0e8?, 0xc004f270b0}, 0x0?) /usr/local/go/src/net/sock_posix.go:149 net.socket({0x7fe0bc8, 0xc0001ae000}, {0x75b686b, 0x3}, 0x2, 0x1, 0xc001e62d08?, 0x68?, {0x7fea0e8, 0x0}, ...) /usr/local/go/src/net/sock_posix.go:70 net.internetSocket({0x7fe0bc8, 0xc0001ae000}, {0x75b686b, 0x3}, {0x7fea0e8, 0x0}, {0x7fea0e8, 0xc004f270b0}, 0xc?, 0x0, ...) /usr/local/go/src/net/ipsock_posix.go:142 net.(*sysDialer).doDialTCP(0xc003c3a090, {0x7fe0bc8, 0xc0001ae000}, 0x0, 0x4?) /usr/local/go/src/net/tcpsock_posix.go:68 net.(*sysDialer).dialTCP(0xc0000bc800?, {0x7fe0bc8?, 0xc0001ae000?}, 0x203001?, 0xc004f270b0?) 
/usr/local/go/src/net/tcpsock_posix.go:64 net.(*sysDialer).dialSingle(0xc003c3a090, {0x7fe0bc8, 0xc0001ae000}, {0x7fbddb0?, 0xc004f270b0}) /usr/local/go/src/net/dial.go:582 net.(*sysDialer).dialSerial(0xc003c3a090, {0x7fe0bc8, 0xc0001ae000}, {0xc000f50880?, 0x1, 0x294fab5?}) /usr/local/go/src/net/dial.go:550 net.(*sysDialer).dialParallel(0xc000f50870?, {0x7fe0bc8?, 0xc0001ae000?}, {0xc000f50880?, 0xc0001ae000?, 0x75b74a2?}, {0x0?, 0x75b686b?, 0xc003acea80?}) /usr/local/go/src/net/dial.go:451 net.(*Dialer).DialContext(0xc001e633c0, {0x7fe0bc8, 0xc0001ae000}, {0x75b686b, 0x3}, {0xc003acea80, 0xf}) /usr/local/go/src/net/dial.go:428 net.(*Dialer).Dial(...) /usr/local/go/src/net/dial.go:355 net.DialTimeout({0x75b686b?, 0x68?}, {0xc003acea80?, 0x0?}, 0x0?) /usr/local/go/src/net/dial.go:337 k8s.io/kubernetes/vendor/golang.org/x/crypto/ssh.Dial({0x75b686b?, 0xc00019c008?}, {0xc003acea80, 0xf}, 0xc0009540d0) vendor/golang.org/x/crypto/ssh/client.go:177 k8s.io/kubernetes/test/e2e/framework/ssh.runSSHCommand.func1() test/e2e/framework/ssh/ssh.go:246 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0001ae000?}, 0x262a61f?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001ae000}, 0xc0023cc1e0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001ae000}, 0x60?, 0x2fd9d05?, 0x40?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0001ae000}, 0x3cea8db?, 0xc001e636b0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b686b?, 0x3cfb82f?, 0xc003acea80?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 k8s.io/kubernetes/test/e2e/framework/ssh.runSSHCommand({0xc0034ce140, 0x47}, {0xc000082075?, 0x3a0bb5d?}, {0xc003acea80, 0xf}, {0x7fb61f0?, 0xc000a00180}) test/e2e/framework/ssh/ssh.go:244 k8s.io/kubernetes/test/e2e/framework/ssh.SSH({0xc0034ce140, 0x47}, {0xc003acea80, 0xf}, {0x7ffde962d9e8, 0x3}) test/e2e/framework/ssh/ssh.go:222 > k8s.io/kubernetes/test/e2e/framework/network.UnblockNetwork.func1() test/e2e/framework/network/utils.go:1147 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0001ae000?}, 0x262a61f?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001ae000}, 0xc004938ed0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001ae000}, 0x48?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0001ae000}, 0xc001e63d00?, 0xc001e63c98?, 0x262a967?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x762542e?, 0x19?, 0xc001e63d00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/framework/network.UnblockNetwork({0xc003acea80, 0xf}, {0xc003aceb80, 0xd}) test/e2e/framework/network/utils.go:1146 > k8s.io/kubernetes/test/e2e/framework/network.TestUnderTemporaryNetworkFailure.func1() test/e2e/framework/network/utils.go:1086 > k8s.io/kubernetes/test/e2e/framework/network.TestUnderTemporaryNetworkFailure({0x801de88, 0xc003f41520}, {0x7fa3ee0?, 0xc0000cf910?}, 0xc000900b00, 0xc002181f58) test/e2e/framework/network/utils.go:1105 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.22() test/e2e/autoscaling/cluster_size_autoscaling.go:694 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004676000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed and one node is broken [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 24m47.701s) test/e2e/autoscaling/cluster_size_autoscaling.go:691 In [It] (Node Runtime: 24m40.13s) test/e2e/autoscaling/cluster_size_autoscaling.go:691 At [By Step] Unblock network traffic from node ca-minion-group-1-wcgp to the control plane (Step Runtime: 9m39.243s) test/e2e/framework/network/utils.go:1084 Spec Goroutine goroutine 8248 [IO wait, 2 minutes] internal/poll.runtime_pollWait(0x7fd44ff8d248, 0x77) /usr/local/go/src/runtime/netpoll.go:305 internal/poll.(*pollDesc).wait(0xc0000f0d98?, 0x77?, 0x0) /usr/local/go/src/internal/poll/fd_poll_runtime.go:84 internal/poll.(*pollDesc).waitWrite(...) /usr/local/go/src/internal/poll/fd_poll_runtime.go:93 internal/poll.(*FD).WaitWrite(...) /usr/local/go/src/internal/poll/fd_unix.go:741 net.(*netFD).connect(0xc0000f0d80, {0x7fe0bc8?, 0xc0001ae000}, {0xc001e62bb8?, 0x262a61f?}, {0x7fae180?, 0xc00439f280?}) /usr/local/go/src/net/fd_unix.go:141 net.(*netFD).dial(0xc0000f0d80, {0x7fe0bc8, 0xc0001ae000}, {0x7fea0e8?, 0x0?}, {0x7fea0e8?, 0xc004f270b0}, 0x0?) /usr/local/go/src/net/sock_posix.go:149 net.socket({0x7fe0bc8, 0xc0001ae000}, {0x75b686b, 0x3}, 0x2, 0x1, 0xc001e62d08?, 0x68?, {0x7fea0e8, 0x0}, ...) /usr/local/go/src/net/sock_posix.go:70 net.internetSocket({0x7fe0bc8, 0xc0001ae000}, {0x75b686b, 0x3}, {0x7fea0e8, 0x0}, {0x7fea0e8, 0xc004f270b0}, 0xc?, 0x0, ...) /usr/local/go/src/net/ipsock_posix.go:142 net.(*sysDialer).doDialTCP(0xc003c3a090, {0x7fe0bc8, 0xc0001ae000}, 0x0, 0x4?) /usr/local/go/src/net/tcpsock_posix.go:68 net.(*sysDialer).dialTCP(0xc0000bc800?, {0x7fe0bc8?, 0xc0001ae000?}, 0x203001?, 0xc004f270b0?) 
/usr/local/go/src/net/tcpsock_posix.go:64 net.(*sysDialer).dialSingle(0xc003c3a090, {0x7fe0bc8, 0xc0001ae000}, {0x7fbddb0?, 0xc004f270b0}) /usr/local/go/src/net/dial.go:582 net.(*sysDialer).dialSerial(0xc003c3a090, {0x7fe0bc8, 0xc0001ae000}, {0xc000f50880?, 0x1, 0x294fab5?}) /usr/local/go/src/net/dial.go:550 net.(*sysDialer).dialParallel(0xc000f50870?, {0x7fe0bc8?, 0xc0001ae000?}, {0xc000f50880?, 0xc0001ae000?, 0x75b74a2?}, {0x0?, 0x75b686b?, 0xc003acea80?}) /usr/local/go/src/net/dial.go:451 net.(*Dialer).DialContext(0xc001e633c0, {0x7fe0bc8, 0xc0001ae000}, {0x75b686b, 0x3}, {0xc003acea80, 0xf}) /usr/local/go/src/net/dial.go:428 net.(*Dialer).Dial(...) /usr/local/go/src/net/dial.go:355 net.DialTimeout({0x75b686b?, 0x68?}, {0xc003acea80?, 0x0?}, 0x0?) /usr/local/go/src/net/dial.go:337 k8s.io/kubernetes/vendor/golang.org/x/crypto/ssh.Dial({0x75b686b?, 0xc00019c008?}, {0xc003acea80, 0xf}, 0xc0009540d0) vendor/golang.org/x/crypto/ssh/client.go:177 k8s.io/kubernetes/test/e2e/framework/ssh.runSSHCommand.func1() test/e2e/framework/ssh/ssh.go:246 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0001ae000?}, 0x262a61f?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001ae000}, 0xc0023cc1e0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001ae000}, 0x60?, 0x2fd9d05?, 0x40?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0001ae000}, 0x3cea8db?, 0xc001e636b0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b686b?, 0x3cfb82f?, 0xc003acea80?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 k8s.io/kubernetes/test/e2e/framework/ssh.runSSHCommand({0xc0034ce140, 0x47}, {0xc000082075?, 0x3a0bb5d?}, {0xc003acea80, 0xf}, {0x7fb61f0?, 0xc000a00180}) test/e2e/framework/ssh/ssh.go:244 k8s.io/kubernetes/test/e2e/framework/ssh.SSH({0xc0034ce140, 0x47}, {0xc003acea80, 0xf}, {0x7ffde962d9e8, 0x3}) test/e2e/framework/ssh/ssh.go:222 > k8s.io/kubernetes/test/e2e/framework/network.UnblockNetwork.func1() test/e2e/framework/network/utils.go:1147 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0001ae000?}, 0x262a61f?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001ae000}, 0xc004938ed0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001ae000}, 0x48?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0001ae000}, 0xc001e63d00?, 0xc001e63c98?, 0x262a967?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x762542e?, 0x19?, 0xc001e63d00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/framework/network.UnblockNetwork({0xc003acea80, 0xf}, {0xc003aceb80, 0xd}) test/e2e/framework/network/utils.go:1146 > k8s.io/kubernetes/test/e2e/framework/network.TestUnderTemporaryNetworkFailure.func1() test/e2e/framework/network/utils.go:1086 > k8s.io/kubernetes/test/e2e/framework/network.TestUnderTemporaryNetworkFailure({0x801de88, 0xc003f41520}, {0x7fa3ee0?, 0xc0000cf910?}, 0xc000900b00, 0xc002181f58) test/e2e/framework/network/utils.go:1105 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.22() test/e2e/autoscaling/cluster_size_autoscaling.go:694 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004676000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed and one node is broken [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 25m7.705s) test/e2e/autoscaling/cluster_size_autoscaling.go:691 In [It] (Node Runtime: 25m0.134s) test/e2e/autoscaling/cluster_size_autoscaling.go:691 At [By Step] Unblock network traffic from node ca-minion-group-1-wcgp to the control plane (Step Runtime: 9m59.247s) test/e2e/framework/network/utils.go:1084 Spec Goroutine goroutine 8248 [IO wait, 2 minutes] internal/poll.runtime_pollWait(0x7fd44ff8d248, 0x77) /usr/local/go/src/runtime/netpoll.go:305 internal/poll.(*pollDesc).wait(0xc0000f0d98?, 0x77?, 0x0) /usr/local/go/src/internal/poll/fd_poll_runtime.go:84 internal/poll.(*pollDesc).waitWrite(...) /usr/local/go/src/internal/poll/fd_poll_runtime.go:93 internal/poll.(*FD).WaitWrite(...) /usr/local/go/src/internal/poll/fd_unix.go:741 net.(*netFD).connect(0xc0000f0d80, {0x7fe0bc8?, 0xc0001ae000}, {0xc001e62bb8?, 0x262a61f?}, {0x7fae180?, 0xc00439f280?}) /usr/local/go/src/net/fd_unix.go:141 net.(*netFD).dial(0xc0000f0d80, {0x7fe0bc8, 0xc0001ae000}, {0x7fea0e8?, 0x0?}, {0x7fea0e8?, 0xc004f270b0}, 0x0?) /usr/local/go/src/net/sock_posix.go:149 net.socket({0x7fe0bc8, 0xc0001ae000}, {0x75b686b, 0x3}, 0x2, 0x1, 0xc001e62d08?, 0x68?, {0x7fea0e8, 0x0}, ...) /usr/local/go/src/net/sock_posix.go:70 net.internetSocket({0x7fe0bc8, 0xc0001ae000}, {0x75b686b, 0x3}, {0x7fea0e8, 0x0}, {0x7fea0e8, 0xc004f270b0}, 0xc?, 0x0, ...) /usr/local/go/src/net/ipsock_posix.go:142 net.(*sysDialer).doDialTCP(0xc003c3a090, {0x7fe0bc8, 0xc0001ae000}, 0x0, 0x4?) /usr/local/go/src/net/tcpsock_posix.go:68 net.(*sysDialer).dialTCP(0xc0000bc800?, {0x7fe0bc8?, 0xc0001ae000?}, 0x203001?, 0xc004f270b0?) 
/usr/local/go/src/net/tcpsock_posix.go:64 net.(*sysDialer).dialSingle(0xc003c3a090, {0x7fe0bc8, 0xc0001ae000}, {0x7fbddb0?, 0xc004f270b0}) /usr/local/go/src/net/dial.go:582 net.(*sysDialer).dialSerial(0xc003c3a090, {0x7fe0bc8, 0xc0001ae000}, {0xc000f50880?, 0x1, 0x294fab5?}) /usr/local/go/src/net/dial.go:550 net.(*sysDialer).dialParallel(0xc000f50870?, {0x7fe0bc8?, 0xc0001ae000?}, {0xc000f50880?, 0xc0001ae000?, 0x75b74a2?}, {0x0?, 0x75b686b?, 0xc003acea80?}) /usr/local/go/src/net/dial.go:451 net.(*Dialer).DialContext(0xc001e633c0, {0x7fe0bc8, 0xc0001ae000}, {0x75b686b, 0x3}, {0xc003acea80, 0xf}) /usr/local/go/src/net/dial.go:428 net.(*Dialer).Dial(...) /usr/local/go/src/net/dial.go:355 net.DialTimeout({0x75b686b?, 0x68?}, {0xc003acea80?, 0x0?}, 0x0?) /usr/local/go/src/net/dial.go:337 k8s.io/kubernetes/vendor/golang.org/x/crypto/ssh.Dial({0x75b686b?, 0xc00019c008?}, {0xc003acea80, 0xf}, 0xc0009540d0) vendor/golang.org/x/crypto/ssh/client.go:177 k8s.io/kubernetes/test/e2e/framework/ssh.runSSHCommand.func1() test/e2e/framework/ssh/ssh.go:246 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0001ae000?}, 0x262a61f?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001ae000}, 0xc0023cc1e0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001ae000}, 0x60?, 0x2fd9d05?, 0x40?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0001ae000}, 0x3cea8db?, 0xc001e636b0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b686b?, 0x3cfb82f?, 0xc003acea80?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 k8s.io/kubernetes/test/e2e/framework/ssh.runSSHCommand({0xc0034ce140, 0x47}, {0xc000082075?, 0x3a0bb5d?}, {0xc003acea80, 0xf}, {0x7fb61f0?, 0xc000a00180}) test/e2e/framework/ssh/ssh.go:244 k8s.io/kubernetes/test/e2e/framework/ssh.SSH({0xc0034ce140, 0x47}, {0xc003acea80, 0xf}, {0x7ffde962d9e8, 0x3}) test/e2e/framework/ssh/ssh.go:222 > k8s.io/kubernetes/test/e2e/framework/network.UnblockNetwork.func1() test/e2e/framework/network/utils.go:1147 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0001ae000?}, 0x262a61f?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001ae000}, 0xc004938ed0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001ae000}, 0x48?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0001ae000}, 0xc001e63d00?, 0xc001e63c98?, 0x262a967?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x762542e?, 0x19?, 0xc001e63d00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/framework/network.UnblockNetwork({0xc003acea80, 0xf}, {0xc003aceb80, 0xd}) test/e2e/framework/network/utils.go:1146 > k8s.io/kubernetes/test/e2e/framework/network.TestUnderTemporaryNetworkFailure.func1() test/e2e/framework/network/utils.go:1086 > k8s.io/kubernetes/test/e2e/framework/network.TestUnderTemporaryNetworkFailure({0x801de88, 0xc003f41520}, {0x7fa3ee0?, 0xc0000cf910?}, 0xc000900b00, 0xc002181f58) test/e2e/framework/network/utils.go:1105 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.22() test/e2e/autoscaling/cluster_size_autoscaling.go:694 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004676000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed and one node is broken [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 25m27.709s) test/e2e/autoscaling/cluster_size_autoscaling.go:691 In [It] (Node Runtime: 25m20.137s) test/e2e/autoscaling/cluster_size_autoscaling.go:691 At [By Step] Unblock network traffic from node ca-minion-group-1-wcgp to the control plane (Step Runtime: 10m19.251s) test/e2e/framework/network/utils.go:1084 Spec Goroutine goroutine 8248 [IO wait, 3 minutes] internal/poll.runtime_pollWait(0x7fd44ff8d248, 0x77) /usr/local/go/src/runtime/netpoll.go:305 internal/poll.(*pollDesc).wait(0xc0000f0d98?, 0x77?, 0x0) /usr/local/go/src/internal/poll/fd_poll_runtime.go:84 internal/poll.(*pollDesc).waitWrite(...) /usr/local/go/src/internal/poll/fd_poll_runtime.go:93 internal/poll.(*FD).WaitWrite(...) /usr/local/go/src/internal/poll/fd_unix.go:741 net.(*netFD).connect(0xc0000f0d80, {0x7fe0bc8?, 0xc0001ae000}, {0xc001e62bb8?, 0x262a61f?}, {0x7fae180?, 0xc00439f280?}) /usr/local/go/src/net/fd_unix.go:141 net.(*netFD).dial(0xc0000f0d80, {0x7fe0bc8, 0xc0001ae000}, {0x7fea0e8?, 0x0?}, {0x7fea0e8?, 0xc004f270b0}, 0x0?) /usr/local/go/src/net/sock_posix.go:149 net.socket({0x7fe0bc8, 0xc0001ae000}, {0x75b686b, 0x3}, 0x2, 0x1, 0xc001e62d08?, 0x68?, {0x7fea0e8, 0x0}, ...) /usr/local/go/src/net/sock_posix.go:70 net.internetSocket({0x7fe0bc8, 0xc0001ae000}, {0x75b686b, 0x3}, {0x7fea0e8, 0x0}, {0x7fea0e8, 0xc004f270b0}, 0xc?, 0x0, ...) /usr/local/go/src/net/ipsock_posix.go:142 net.(*sysDialer).doDialTCP(0xc003c3a090, {0x7fe0bc8, 0xc0001ae000}, 0x0, 0x4?) /usr/local/go/src/net/tcpsock_posix.go:68 net.(*sysDialer).dialTCP(0xc0000bc800?, {0x7fe0bc8?, 0xc0001ae000?}, 0x203001?, 0xc004f270b0?) 
/usr/local/go/src/net/tcpsock_posix.go:64 net.(*sysDialer).dialSingle(0xc003c3a090, {0x7fe0bc8, 0xc0001ae000}, {0x7fbddb0?, 0xc004f270b0}) /usr/local/go/src/net/dial.go:582 net.(*sysDialer).dialSerial(0xc003c3a090, {0x7fe0bc8, 0xc0001ae000}, {0xc000f50880?, 0x1, 0x294fab5?}) /usr/local/go/src/net/dial.go:550 net.(*sysDialer).dialParallel(0xc000f50870?, {0x7fe0bc8?, 0xc0001ae000?}, {0xc000f50880?, 0xc0001ae000?, 0x75b74a2?}, {0x0?, 0x75b686b?, 0xc003acea80?}) /usr/local/go/src/net/dial.go:451 net.(*Dialer).DialContext(0xc001e633c0, {0x7fe0bc8, 0xc0001ae000}, {0x75b686b, 0x3}, {0xc003acea80, 0xf}) /usr/local/go/src/net/dial.go:428 net.(*Dialer).Dial(...) /usr/local/go/src/net/dial.go:355 net.DialTimeout({0x75b686b?, 0x68?}, {0xc003acea80?, 0x0?}, 0x0?) /usr/local/go/src/net/dial.go:337 k8s.io/kubernetes/vendor/golang.org/x/crypto/ssh.Dial({0x75b686b?, 0xc00019c008?}, {0xc003acea80, 0xf}, 0xc0009540d0) vendor/golang.org/x/crypto/ssh/client.go:177 k8s.io/kubernetes/test/e2e/framework/ssh.runSSHCommand.func1() test/e2e/framework/ssh/ssh.go:246 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0001ae000?}, 0x262a61f?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001ae000}, 0xc0023cc1e0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001ae000}, 0x60?, 0x2fd9d05?, 0x40?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0001ae000}, 0x3cea8db?, 0xc001e636b0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b686b?, 0x3cfb82f?, 0xc003acea80?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 k8s.io/kubernetes/test/e2e/framework/ssh.runSSHCommand({0xc0034ce140, 0x47}, {0xc000082075?, 0x3a0bb5d?}, {0xc003acea80, 0xf}, {0x7fb61f0?, 0xc000a00180}) test/e2e/framework/ssh/ssh.go:244 k8s.io/kubernetes/test/e2e/framework/ssh.SSH({0xc0034ce140, 0x47}, {0xc003acea80, 0xf}, {0x7ffde962d9e8, 0x3}) test/e2e/framework/ssh/ssh.go:222 > k8s.io/kubernetes/test/e2e/framework/network.UnblockNetwork.func1() test/e2e/framework/network/utils.go:1147 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0001ae000?}, 0x262a61f?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001ae000}, 0xc004938ed0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001ae000}, 0x48?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0001ae000}, 0xc001e63d00?, 0xc001e63c98?, 0x262a967?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x762542e?, 0x19?, 0xc001e63d00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/framework/network.UnblockNetwork({0xc003acea80, 0xf}, {0xc003aceb80, 0xd}) test/e2e/framework/network/utils.go:1146 > k8s.io/kubernetes/test/e2e/framework/network.TestUnderTemporaryNetworkFailure.func1() test/e2e/framework/network/utils.go:1086 > k8s.io/kubernetes/test/e2e/framework/network.TestUnderTemporaryNetworkFailure({0x801de88, 0xc003f41520}, {0x7fa3ee0?, 0xc0000cf910?}, 0xc000900b00, 0xc002181f58) test/e2e/framework/network/utils.go:1105 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.22() test/e2e/autoscaling/cluster_size_autoscaling.go:694 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004676000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
[Identical progress polls elided: Ginkgo re-emitted the dump above every 20 seconds, from Spec Runtime 25m47.71s through 28m7.736s (Step Runtime 10m39.252s through 12m59.278s), all at step "Unblock network traffic from node ca-minion-group-1-wcgp to the control plane". Every poll showed Spec Goroutine 8248 in [IO wait], blocked in net.DialTimeout via golang.org/x/crypto/ssh.Dial while wait.Poll retried the SSH connection inside test/e2e/framework/network.UnblockNetwork.]
------------------------------
Nov 30 12:57:59.473: INFO: ssh prow@34.82.84.140:22: command: sudo iptables --delete OUTPUT --destination 35.230.76.149 --jump REJECT Nov 30 12:57:59.473: INFO: ssh prow@34.82.84.140:22: stdout: "" Nov 30 12:57:59.473: INFO: ssh prow@34.82.84.140:22: stderr: "" Nov 30 12:57:59.473: INFO: ssh prow@34.82.84.140:22: exit code: 0 Nov 30 12:57:59.473: INFO: Unexpected error: error getting SSH client to prow@34.82.84.140:22: 'timed out waiting for the condition' Nov 30 12:57:59.473: FAIL: Failed to remove the iptable REJECT rule. 
Manual intervention is required on host 34.82.84.140:22: remove rule OUTPUT --destination 35.230.76.149 --jump REJECT, if exists Full Stack Trace k8s.io/kubernetes/test/e2e/framework/network.UnblockNetwork({0xc003acea80, 0xf}, {0xc003aceb80, 0xd}) test/e2e/framework/network/utils.go:1158 +0x26a k8s.io/kubernetes/test/e2e/framework/network.TestUnderTemporaryNetworkFailure.func1() test/e2e/framework/network/utils.go:1086 +0xd7 k8s.io/kubernetes/test/e2e/framework/network.TestUnderTemporaryNetworkFailure({0x801de88, 0xc003f41520}, {0x7fa3ee0?, 0xc0000cf910?}, 0xc000900b00, 0xc002181f58) test/e2e/framework/network/utils.go:1105 +0x4b1 k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.22() test/e2e/autoscaling/cluster_size_autoscaling.go:694 +0x89 [AfterEach] [sig-autoscaling] Cluster size autoscaling [Slow] test/e2e/framework/node/init/init.go:32 Nov 30 12:57:59.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-autoscaling] Cluster size autoscaling [Slow] test/e2e/autoscaling/cluster_size_autoscaling.go:139 STEP: Restoring initial size of the cluster 11/30/22 12:57:59.643 STEP: Setting size of ca-minion-group-1 to 1 11/30/22 12:58:03.445 Nov 30 12:58:03.445: INFO: Skipping dumping logs from cluster Nov 30 12:58:09.684: INFO: Skipping dumping logs from cluster Nov 30 12:58:13.533: INFO: Waiting for ready nodes 2, current ready 1, not ready nodes 0 Nov 30 12:58:33.577: INFO: Waiting for ready nodes 2, current ready 1, not ready nodes 0 Nov 30 12:58:53.623: INFO: Waiting for ready nodes 2, current ready 1, not ready nodes 0 Nov 30 12:59:13.668: INFO: Waiting for ready nodes 2, current ready 1, not ready nodes 0 Nov 30 12:59:33.713: INFO: Cluster has reached the desired number of ready nodes 2 STEP: Remove taint from node ca-master 11/30/22 12:59:33.763 STEP: Remove taint from node ca-minion-group-1-dzw1 11/30/22 12:59:33.806 STEP: Remove taint from node ca-minion-group-wp8h 11/30/22 12:59:33.849 I1130 12:59:33.892077 8016 cluster_size_autoscaling.go:165] Made nodes schedulable again in 128.764107ms [DeferCleanup (Each)] [sig-autoscaling] Cluster size autoscaling [Slow] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-autoscaling] Cluster size autoscaling [Slow] dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/30/22 12:59:33.892 STEP: Collecting events from namespace "autoscaling-6360". 11/30/22 12:59:33.892 STEP: Found 0 events. 
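The FAIL above leaves the cluster in a state the test cannot repair on its own: the REJECT rule injected to simulate the broken node may still be present on the host, so the framework asks an operator to remove it by hand. A minimal sketch of that cleanup, assuming the `prow` SSH user and the host/destination addresses reported in the log (nothing here beyond values quoted in the log is verified):

    # Hypothetical manual cleanup for the leftover rule named in the FAIL message.
    HOST=34.82.84.140   # node whose OUTPUT chain may still hold the rule
    DEST=35.230.76.149  # control-plane address the test had blocked
    # `iptables -C` checks whether the rule exists (exit 0 on match), so the
    # delete only runs when the rule is actually present ("if exists").
    ssh prow@"$HOST" \
      "sudo iptables -C OUTPUT --destination $DEST --jump REJECT 2>/dev/null \
       && sudo iptables --delete OUTPUT --destination $DEST --jump REJECT"

This is presumably the inverse of the rule that TestUnderTemporaryNetworkFailure installed when it blocked traffic to the control plane; the deferred UnblockNetwork step issues the same `iptables --delete` seen in the log, and it only escalates to manual cleanup because the SSH dial kept timing out.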
11/30/22 12:59:33.933 Nov 30 12:59:33.974: INFO: POD NODE PHASE GRACE CONDITIONS Nov 30 12:59:33.974: INFO: Nov 30 12:59:34.020: INFO: Logging node info for node ca-master Nov 30 12:59:34.062: INFO: Node Info: &Node{ObjectMeta:{ca-master 2a25a1e5-76d1-4d88-8f78-b63dca9ba016 47616 0 2022-11-30 08:55:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:ca-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-30 08:55:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-30 08:55:56 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2022-11-30 08:56:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2022-11-30 12:56:25 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-autoscaling-migs/us-west1-b/ca-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-30 08:55:56 +0000 UTC,LastTransitionTime:2022-11-30 08:55:56 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-30 12:56:25 +0000 UTC,LastTransitionTime:2022-11-30 08:55:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-30 12:56:25 +0000 UTC,LastTransitionTime:2022-11-30 08:55:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-30 12:56:25 +0000 UTC,LastTransitionTime:2022-11-30 08:55:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-30 12:56:25 +0000 UTC,LastTransitionTime:2022-11-30 08:56:05 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.230.76.149,},NodeAddress{Type:InternalDNS,Address:ca-master.c.k8s-jkns-gci-autoscaling-migs.internal,},NodeAddress{Type:Hostname,Address:ca-master.c.k8s-jkns-gci-autoscaling-migs.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bac070a384fefb9133ee7878e15673cf,SystemUUID:bac070a3-84fe-fb91-33ee-7878e15673cf,BootID:fe19ddf9-af1e-416e-a389-0ed6e929f60e,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.6.6,KubeletVersion:v1.27.0-alpha.0.53+d98e9620e37995,KubeProxyVersion:v1.27.0-alpha.0.53+d98e9620e37995,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.53_d98e9620e37995],SizeBytes:135160275,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.53_d98e9620e37995],SizeBytes:124990265,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.53_d98e9620e37995],SizeBytes:57660216,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/autoscaling/cluster-autoscaler@sha256:07ab8c89cd0ad296ddb6347febe196d8fe0e1c757656a98f71199860d87cf1a5 registry.k8s.io/autoscaling/cluster-autoscaler:v1.22.0],SizeBytes:24220268,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c 
registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 30 12:59:34.063: INFO: Logging kubelet events for node ca-master Nov 30 12:59:34.109: INFO: Logging pods the kubelet thinks is on node ca-master Nov 30 12:59:34.211: INFO: konnectivity-server-ca-master started at 2022-11-30 08:55:00 +0000 UTC (0+1 container statuses recorded) Nov 30 12:59:34.211: INFO: Container konnectivity-server-container ready: true, restart count 0 Nov 30 12:59:34.211: INFO: kube-addon-manager-ca-master started at 2022-11-30 08:55:18 +0000 UTC (0+1 container statuses recorded) Nov 30 12:59:34.211: INFO: Container kube-addon-manager ready: true, restart count 0 Nov 30 12:59:34.211: INFO: metadata-proxy-v0.1-vp7mp started at 2022-11-30 08:56:16 +0000 UTC (0+2 container statuses recorded) Nov 30 12:59:34.211: INFO: Container metadata-proxy ready: true, restart count 0 Nov 30 12:59:34.211: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 30 12:59:34.211: INFO: l7-lb-controller-ca-master started at 2022-11-30 08:55:18 +0000 UTC (0+1 container statuses recorded) Nov 30 12:59:34.211: INFO: Container l7-lb-controller ready: true, restart count 2 Nov 30 12:59:34.211: INFO: kube-apiserver-ca-master started at 2022-11-30 08:55:00 +0000 UTC (0+1 container statuses recorded) Nov 30 12:59:34.211: INFO: Container kube-apiserver ready: true, restart count 0 Nov 30 12:59:34.211: INFO: kube-controller-manager-ca-master started at 2022-11-30 08:55:00 +0000 UTC (0+1 container statuses recorded) Nov 30 12:59:34.211: INFO: Container kube-controller-manager ready: true, restart count 1 Nov 30 12:59:34.211: INFO: kube-scheduler-ca-master started at 2022-11-30 08:55:00 +0000 UTC (0+1 container statuses recorded) Nov 30 12:59:34.211: INFO: Container kube-scheduler ready: true, restart count 0 Nov 30 12:59:34.211: INFO: etcd-server-events-ca-master started at 2022-11-30 08:55:00 +0000 UTC (0+1 container statuses recorded) Nov 30 12:59:34.211: INFO: Container etcd-container ready: true, restart count 0 Nov 30 12:59:34.211: INFO: etcd-server-ca-master started at 2022-11-30 08:55:00 +0000 UTC (0+1 container statuses recorded) Nov 30 12:59:34.211: INFO: Container etcd-container ready: true, restart count 0 Nov 30 12:59:34.211: INFO: cluster-autoscaler-ca-master started at 2022-11-30 08:55:18 +0000 UTC (0+1 container statuses recorded) Nov 30 12:59:34.211: INFO: Container cluster-autoscaler ready: true, restart count 2 Nov 30 12:59:34.458: INFO: Latency metrics for node ca-master Nov 30 12:59:34.458: INFO: Logging node info for node ca-minion-group-1-dzw1 Nov 30 12:59:34.502: INFO: Node Info: &Node{ObjectMeta:{ca-minion-group-1-dzw1 81c292ef-0ce6-4d60-b976-888da333af3d 48099 0 2022-11-30 12:59:15 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:ca-minion-group-1-dzw1 kubernetes.io/os:linux 
node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-30 12:59:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubelet Update v1 2022-11-30 12:59:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2022-11-30 12:59:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-30 12:59:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.48.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-11-30 12:59:29 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.48.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-autoscaling-migs/us-west1-b/ca-minion-group-1-dzw1,Unschedulable:false,Taints:[]Taint{Taint{Key:DeletionCandidateOfClusterAutoscaler,Value:1669813157,Effect:PreferNoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.48.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 
0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-30 12:59:21 +0000 UTC,LastTransitionTime:2022-11-30 12:59:20 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-30 12:59:21 +0000 UTC,LastTransitionTime:2022-11-30 12:59:20 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-30 12:59:21 +0000 UTC,LastTransitionTime:2022-11-30 12:59:20 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-30 12:59:21 +0000 UTC,LastTransitionTime:2022-11-30 12:59:20 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-30 12:59:21 +0000 UTC,LastTransitionTime:2022-11-30 12:59:20 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-30 12:59:21 +0000 UTC,LastTransitionTime:2022-11-30 12:59:20 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-30 12:59:21 +0000 UTC,LastTransitionTime:2022-11-30 12:59:20 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-30 12:59:29 +0000 UTC,LastTransitionTime:2022-11-30 12:59:29 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-30 12:59:16 +0000 UTC,LastTransitionTime:2022-11-30 12:59:15 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-30 12:59:16 +0000 UTC,LastTransitionTime:2022-11-30 12:59:15 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-30 12:59:16 +0000 UTC,LastTransitionTime:2022-11-30 12:59:15 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-30 12:59:16 +0000 UTC,LastTransitionTime:2022-11-30 12:59:16 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.50,},NodeAddress{Type:ExternalIP,Address:34.127.72.223,},NodeAddress{Type:InternalDNS,Address:ca-minion-group-1-dzw1.c.k8s-jkns-gci-autoscaling-migs.internal,},NodeAddress{Type:Hostname,Address:ca-minion-group-1-dzw1.c.k8s-jkns-gci-autoscaling-migs.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2a30f901715ce75de38e7bfdb6cfcb8c,SystemUUID:2a30f901-715c-e75d-e38e-7bfdb6cfcb8c,BootID:5a94539d-5a96-434c-ae90-d3208ae29e82,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.6.6,KubeletVersion:v1.27.0-alpha.0.53+d98e9620e37995,KubeProxyVersion:v1.27.0-alpha.0.53+d98e9620e37995,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.53_d98e9620e37995],SizeBytes:67201736,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 30 12:59:34.502: INFO: Logging kubelet events for node ca-minion-group-1-dzw1 Nov 30 12:59:34.553: INFO: Logging pods the kubelet thinks is on node ca-minion-group-1-dzw1 Nov 30 12:59:34.628: INFO: metadata-proxy-v0.1-fknwn started at 2022-11-30 12:59:16 +0000 UTC (0+2 container statuses recorded) Nov 30 12:59:34.628: INFO: Container metadata-proxy ready: true, restart count 0 Nov 30 12:59:34.628: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 30 12:59:34.628: INFO: konnectivity-agent-czffw started at 2022-11-30 12:59:29 +0000 UTC (0+1 container statuses recorded) Nov 30 12:59:34.628: INFO: Container konnectivity-agent ready: true, restart count 0 Nov 30 12:59:34.628: INFO: kube-proxy-ca-minion-group-1-dzw1 started at 2022-11-30 12:59:15 +0000 UTC (0+1 container statuses recorded) Nov 30 12:59:34.628: INFO: Container kube-proxy ready: true, restart count 0 Nov 30 12:59:34.816: INFO: Latency metrics for node ca-minion-group-1-dzw1 Nov 30 12:59:34.816: INFO: Logging node info for node ca-minion-group-wp8h Nov 30 12:59:34.859: INFO: Node Info: &Node{ObjectMeta:{ca-minion-group-wp8h cabb7ed6-6a4e-4a14-a7cb-07ef65191e0f 47926 0 2022-11-30 09:09:55 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:ca-minion-group-wp8h kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-30 09:09:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-30 09:10:06 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.7.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-30 09:10:06 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2022-11-30 12:55:26 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-30 12:58:30 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.7.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-autoscaling-migs/us-west1-b/ca-minion-group-wp8h,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.7.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-30 12:55:26 +0000 UTC,LastTransitionTime:2022-11-30 09:09:59 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-30 12:55:26 +0000 UTC,LastTransitionTime:2022-11-30 09:09:59 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-30 12:55:26 +0000 
UTC,LastTransitionTime:2022-11-30 09:09:59 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-30 12:55:26 +0000 UTC,LastTransitionTime:2022-11-30 09:09:59 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-30 12:55:26 +0000 UTC,LastTransitionTime:2022-11-30 09:09:59 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-30 12:55:26 +0000 UTC,LastTransitionTime:2022-11-30 09:09:59 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-30 12:55:26 +0000 UTC,LastTransitionTime:2022-11-30 09:09:59 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-30 09:10:06 +0000 UTC,LastTransitionTime:2022-11-30 09:10:06 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-30 12:58:30 +0000 UTC,LastTransitionTime:2022-11-30 09:09:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-30 12:58:30 +0000 UTC,LastTransitionTime:2022-11-30 09:09:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-30 12:58:30 +0000 UTC,LastTransitionTime:2022-11-30 09:09:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-30 12:58:30 +0000 UTC,LastTransitionTime:2022-11-30 09:09:56 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.8,},NodeAddress{Type:ExternalIP,Address:34.168.80.138,},NodeAddress{Type:InternalDNS,Address:ca-minion-group-wp8h.c.k8s-jkns-gci-autoscaling-migs.internal,},NodeAddress{Type:Hostname,Address:ca-minion-group-wp8h.c.k8s-jkns-gci-autoscaling-migs.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b506b63fb01c6040e71588bca8be6fdd,SystemUUID:b506b63f-b01c-6040-e715-88bca8be6fdd,BootID:bd2be204-29ef-43fe-9f42-c8f31fa19831,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.6.6,KubeletVersion:v1.27.0-alpha.0.53+d98e9620e37995,KubeProxyVersion:v1.27.0-alpha.0.53+d98e9620e37995,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.53_d98e9620e37995],SizeBytes:67201736,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 30 12:59:34.860: INFO: Logging kubelet events for node ca-minion-group-wp8h Nov 30 12:59:34.911: INFO: Logging pods the kubelet thinks is on node ca-minion-group-wp8h Nov 30 12:59:34.977: INFO: metadata-proxy-v0.1-kx6wg started at 2022-11-30 09:09:56 +0000 UTC (0+2 container statuses recorded) Nov 30 12:59:34.977: INFO: Container metadata-proxy ready: true, restart count 0 Nov 30 12:59:34.977: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 30 12:59:34.977: INFO: konnectivity-agent-hh8bs started at 2022-11-30 09:10:06 +0000 UTC (0+1 container statuses recorded) Nov 30 12:59:34.977: INFO: Container konnectivity-agent ready: true, 
restart count 0 Nov 30 12:59:34.977: INFO: kube-proxy-ca-minion-group-wp8h started at 2022-11-30 09:09:56 +0000 UTC (0+1 container statuses recorded) Nov 30 12:59:34.977: INFO: Container kube-proxy ready: true, restart count 0 Nov 30 12:59:34.977: INFO: volume-snapshot-controller-0 started at 2022-11-30 10:04:18 +0000 UTC (0+1 container statuses recorded) Nov 30 12:59:34.977: INFO: Container volume-snapshot-controller ready: true, restart count 0 Nov 30 12:59:34.977: INFO: coredns-6d97d5ddb-fwzcx started at 2022-11-30 09:38:58 +0000 UTC (0+1 container statuses recorded) Nov 30 12:59:34.977: INFO: Container coredns ready: true, restart count 0 Nov 30 12:59:34.977: INFO: metrics-server-v0.5.2-867b8754b9-4qcz5 started at 2022-11-30 09:33:57 +0000 UTC (0+2 container statuses recorded) Nov 30 12:59:34.977: INFO: Container metrics-server ready: true, restart count 0 Nov 30 12:59:34.977: INFO: Container metrics-server-nanny ready: true, restart count 0 Nov 30 12:59:34.977: INFO: l7-default-backend-8549d69d99-sn7gr started at 2022-11-30 09:38:58 +0000 UTC (0+1 container statuses recorded) Nov 30 12:59:34.977: INFO: Container default-http-backend ready: true, restart count 0 Nov 30 12:59:35.154: INFO: Latency metrics for node ca-minion-group-wp8h [DeferCleanup (Each)] [sig-autoscaling] Cluster size autoscaling [Slow] tear down framework | framework.go:193 STEP: Destroying namespace "autoscaling-6360" for this suite. 11/30/22 12:59:35.155
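The entry above opens with "Manual intervention is required on host 34.82.84.140:22: remove rule OUTPUT --destination 35.230.76.149 --jump REJECT, if exists": UnblockNetwork could not undo the REJECT rule that TestUnderTemporaryNetworkFailure had installed to cut the node off from the master (35.230.76.149 is ca-master's external IP in the dump above), so the rule may still be present on the node. What follows is a minimal, hedged sketch of that manual cleanup, assuming SSH and sudo access to the host named in the message; the helper is illustrative and is not the framework's own code.

// unblock.go: hedged sketch of the manual cleanup the failure message asks for.
// Assumptions: ssh access to 34.82.84.140 with sudo rights; the rule text is
// copied verbatim from the log line above.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	host := "34.82.84.140"    // node named in the failure message
	master := "35.230.76.149" // ca-master external IP being un-REJECTed
	// `iptables -C` checks whether the rule exists (the "if exists" in the
	// message); only attempt the delete when the check succeeds.
	check := fmt.Sprintf("sudo iptables -C OUTPUT --destination %s --jump REJECT", master)
	del := fmt.Sprintf("sudo iptables -D OUTPUT --destination %s --jump REJECT", master)
	if err := exec.Command("ssh", host, check).Run(); err != nil {
		log.Printf("rule not present on %s (or check failed), nothing to do: %v", host, err)
		return
	}
	if err := exec.Command("ssh", host, del).Run(); err != nil {
		log.Fatalf("failed to delete REJECT rule on %s: %v", host, err)
	}
	log.Printf("removed OUTPUT REJECT rule for %s on %s", master, host)
}

Leaving the rule in place keeps the node unable to reach the API server, which is why the framework flags it for manual removal rather than silently continuing.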
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-autoscaling\]\sCluster\ssize\sautoscaling\s\[Slow\]\sshould\sincrease\scluster\ssize\sif\spod\srequesting\svolume\sis\spending\s\[Feature\:ClusterSizeAutoscalingScaleUp\]$'
test/e2e/autoscaling/cluster_size_autoscaling.go:127 k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.1() test/e2e/autoscaling/cluster_size_autoscaling.go:127 +0x319 from junit_01.xml
[BeforeEach] [sig-autoscaling] Cluster size autoscaling [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/30/22 12:21:52.671 Nov 30 12:21:52.672: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename autoscaling 11/30/22 12:21:52.673 STEP: Waiting for a default service account to be provisioned in namespace 11/30/22 12:21:52.806 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/30/22 12:21:52.887 [BeforeEach] [sig-autoscaling] Cluster size autoscaling [Slow] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-autoscaling] Cluster size autoscaling [Slow] test/e2e/autoscaling/cluster_size_autoscaling.go:103 STEP: Initial size of ca-minion-group-1: 1 11/30/22 12:21:56.421 STEP: Initial size of ca-minion-group: 1 11/30/22 12:21:59.948 Nov 30 12:21:59.993: INFO: Cluster has reached the desired number of ready nodes 2 STEP: Initial number of schedulable nodes: 1 11/30/22 12:22:00.037 Nov 30 12:22:00.038: FAIL: Expected <int>: 1 to equal <int>: 2 Full Stack Trace k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.1() test/e2e/autoscaling/cluster_size_autoscaling.go:127 +0x319 [AfterEach] [sig-autoscaling] Cluster size autoscaling [Slow] test/e2e/framework/node/init/init.go:32 Nov 30 12:22:00.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-autoscaling] Cluster size autoscaling [Slow] test/e2e/autoscaling/cluster_size_autoscaling.go:139 STEP: Restoring initial size of the cluster 11/30/22 12:22:00.083 STEP: Setting size of ca-minion-group-1 to 1 11/30/22 12:22:03.75 Nov 30 12:22:03.750: INFO: Skipping dumping logs from cluster Nov 30 12:22:08.815: INFO: Skipping dumping logs from cluster Nov 30 12:22:12.374: INFO: Cluster has reached the desired number of ready nodes 2 STEP: Remove taint from node ca-master 11/30/22 12:22:12.42 STEP: Remove taint from node ca-minion-group-1-ng86 11/30/22 12:22:12.463 STEP: Remove taint from node ca-minion-group-wp8h 11/30/22 12:22:12.507 I1130 12:22:12.550159 8016 cluster_size_autoscaling.go:165] Made nodes schedulable again in 129.97271ms [DeferCleanup (Each)] [sig-autoscaling] Cluster size autoscaling [Slow] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-autoscaling] Cluster size autoscaling [Slow] dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/30/22 12:22:12.55 STEP: Collecting events from namespace "autoscaling-766". 11/30/22 12:22:12.55 STEP: Found 0 events. 
11/30/22 12:22:12.593 Nov 30 12:22:12.636: INFO: POD NODE PHASE GRACE CONDITIONS Nov 30 12:22:12.636: INFO: Nov 30 12:22:12.683: INFO: Logging node info for node ca-master Nov 30 12:22:12.725: INFO: Node Info: &Node{ObjectMeta:{ca-master 2a25a1e5-76d1-4d88-8f78-b63dca9ba016 41235 0 2022-11-30 08:55:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:ca-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-30 08:55:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-30 08:55:56 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2022-11-30 08:56:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2022-11-30 12:20:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-autoscaling-migs/us-west1-b/ca-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-30 08:55:56 +0000 UTC,LastTransitionTime:2022-11-30 08:55:56 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-30 12:20:41 +0000 UTC,LastTransitionTime:2022-11-30 08:55:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-30 12:20:41 +0000 UTC,LastTransitionTime:2022-11-30 08:55:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-30 12:20:41 +0000 UTC,LastTransitionTime:2022-11-30 08:55:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-30 12:20:41 +0000 UTC,LastTransitionTime:2022-11-30 08:56:05 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.230.76.149,},NodeAddress{Type:InternalDNS,Address:ca-master.c.k8s-jkns-gci-autoscaling-migs.internal,},NodeAddress{Type:Hostname,Address:ca-master.c.k8s-jkns-gci-autoscaling-migs.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bac070a384fefb9133ee7878e15673cf,SystemUUID:bac070a3-84fe-fb91-33ee-7878e15673cf,BootID:fe19ddf9-af1e-416e-a389-0ed6e929f60e,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.6.6,KubeletVersion:v1.27.0-alpha.0.53+d98e9620e37995,KubeProxyVersion:v1.27.0-alpha.0.53+d98e9620e37995,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.53_d98e9620e37995],SizeBytes:135160275,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.53_d98e9620e37995],SizeBytes:124990265,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.53_d98e9620e37995],SizeBytes:57660216,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/autoscaling/cluster-autoscaler@sha256:07ab8c89cd0ad296ddb6347febe196d8fe0e1c757656a98f71199860d87cf1a5 registry.k8s.io/autoscaling/cluster-autoscaler:v1.22.0],SizeBytes:24220268,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c 
registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 30 12:22:12.726: INFO: Logging kubelet events for node ca-master Nov 30 12:22:12.770: INFO: Logging pods the kubelet thinks is on node ca-master Nov 30 12:22:12.837: INFO: konnectivity-server-ca-master started at 2022-11-30 08:55:00 +0000 UTC (0+1 container statuses recorded) Nov 30 12:22:12.837: INFO: Container konnectivity-server-container ready: true, restart count 0 Nov 30 12:22:12.837: INFO: kube-addon-manager-ca-master started at 2022-11-30 08:55:18 +0000 UTC (0+1 container statuses recorded) Nov 30 12:22:12.837: INFO: Container kube-addon-manager ready: true, restart count 0 Nov 30 12:22:12.837: INFO: metadata-proxy-v0.1-vp7mp started at 2022-11-30 08:56:16 +0000 UTC (0+2 container statuses recorded) Nov 30 12:22:12.837: INFO: Container metadata-proxy ready: true, restart count 0 Nov 30 12:22:12.837: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 30 12:22:12.837: INFO: cluster-autoscaler-ca-master started at 2022-11-30 08:55:18 +0000 UTC (0+1 container statuses recorded) Nov 30 12:22:12.837: INFO: Container cluster-autoscaler ready: true, restart count 2 Nov 30 12:22:12.837: INFO: l7-lb-controller-ca-master started at 2022-11-30 08:55:18 +0000 UTC (0+1 container statuses recorded) Nov 30 12:22:12.837: INFO: Container l7-lb-controller ready: true, restart count 2 Nov 30 12:22:12.837: INFO: kube-apiserver-ca-master started at 2022-11-30 08:55:00 +0000 UTC (0+1 container statuses recorded) Nov 30 12:22:12.837: INFO: Container kube-apiserver ready: true, restart count 0 Nov 30 12:22:12.837: INFO: kube-controller-manager-ca-master started at 2022-11-30 08:55:00 +0000 UTC (0+1 container statuses recorded) Nov 30 12:22:12.837: INFO: Container kube-controller-manager ready: true, restart count 1 Nov 30 12:22:12.837: INFO: kube-scheduler-ca-master started at 2022-11-30 08:55:00 +0000 UTC (0+1 container statuses recorded) Nov 30 12:22:12.837: INFO: Container kube-scheduler ready: true, restart count 0 Nov 30 12:22:12.837: INFO: etcd-server-events-ca-master started at 2022-11-30 08:55:00 +0000 UTC (0+1 container statuses recorded) Nov 30 12:22:12.837: INFO: Container etcd-container ready: true, restart count 0 Nov 30 12:22:12.837: INFO: etcd-server-ca-master started at 2022-11-30 08:55:00 +0000 UTC (0+1 container statuses recorded) Nov 30 12:22:12.837: INFO: Container etcd-container ready: true, restart count 0 Nov 30 12:22:13.068: INFO: Latency metrics for node ca-master Nov 30 12:22:13.068: INFO: Logging node info for node ca-minion-group-1-ng86 Nov 30 12:22:13.111: INFO: Node Info: &Node{ObjectMeta:{ca-minion-group-1-ng86 f24aab78-1d08-4bd6-a17d-7633ede5752e 41461 0 2022-11-30 12:06:32 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:ca-minion-group-1-ng86 kubernetes.io/os:linux 
node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-30 12:06:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.36.0/24\"":{}}}} } {kubelet Update v1 2022-11-30 12:06:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-30 12:06:44 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-30 12:17:17 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status} {node-problem-detector Update v1 2022-11-30 12:21:39 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {cluster-autoscaler Update v1 2022-11-30 12:21:56 +0000 UTC FieldsV1 {"f:spec":{"f:taints":{}}} }]},Spec:NodeSpec{PodCIDR:10.64.36.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-autoscaling-migs/us-west1-b/ca-minion-group-1-ng86,Unschedulable:false,Taints:[]Taint{Taint{Key:DeletionCandidateOfClusterAutoscaler,Value:1669810523,Effect:PreferNoSchedule,TimeAdded:<nil>,},Taint{Key:ToBeDeletedByClusterAutoscaler,Value:1669810916,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.36.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: 
{{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-30 12:21:39 +0000 UTC,LastTransitionTime:2022-11-30 12:06:36 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-30 12:21:39 +0000 UTC,LastTransitionTime:2022-11-30 12:06:36 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-30 12:21:39 +0000 UTC,LastTransitionTime:2022-11-30 12:06:36 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-30 12:21:39 +0000 UTC,LastTransitionTime:2022-11-30 12:06:36 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-30 12:21:39 +0000 UTC,LastTransitionTime:2022-11-30 12:06:36 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-30 12:21:39 +0000 UTC,LastTransitionTime:2022-11-30 12:06:36 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-30 12:21:39 +0000 UTC,LastTransitionTime:2022-11-30 12:06:36 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-30 12:06:44 +0000 UTC,LastTransitionTime:2022-11-30 12:06:44 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-30 12:17:17 +0000 UTC,LastTransitionTime:2022-11-30 12:06:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-30 12:17:17 +0000 UTC,LastTransitionTime:2022-11-30 12:06:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-30 12:17:17 +0000 UTC,LastTransitionTime:2022-11-30 12:06:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-30 12:17:17 +0000 UTC,LastTransitionTime:2022-11-30 12:06:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.38,},NodeAddress{Type:ExternalIP,Address:35.227.188.214,},NodeAddress{Type:InternalDNS,Address:ca-minion-group-1-ng86.c.k8s-jkns-gci-autoscaling-migs.internal,},NodeAddress{Type:Hostname,Address:ca-minion-group-1-ng86.c.k8s-jkns-gci-autoscaling-migs.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:76a3d95baa1a25ed5dd35eb8cdcd500b,SystemUUID:76a3d95b-aa1a-25ed-5dd3-5eb8cdcd500b,BootID:d6a11685-71f0-4130-b2e0-e89e6d0946c0,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.6.6,KubeletVersion:v1.27.0-alpha.0.53+d98e9620e37995,KubeProxyVersion:v1.27.0-alpha.0.53+d98e9620e37995,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.53_d98e9620e37995],SizeBytes:67201736,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 30 12:22:13.111: INFO: Logging kubelet events for node ca-minion-group-1-ng86 Nov 30 12:22:13.157: INFO: Logging pods the kubelet thinks is on node ca-minion-group-1-ng86 Nov 30 12:22:18.205: INFO: Unable to retrieve kubelet pods for node ca-minion-group-1-ng86: error trying to reach service: dial tcp 10.138.0.38:10250: i/o timeout Nov 30 12:22:18.205: INFO: Logging node info for node ca-minion-group-wp8h Nov 30 12:22:18.248: INFO: Node Info: &Node{ObjectMeta:{ca-minion-group-wp8h cabb7ed6-6a4e-4a14-a7cb-07ef65191e0f 41182 0 2022-11-30 09:09:55 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:ca-minion-group-wp8h kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-30 09:09:55 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-30 09:10:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.7.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-30 09:10:06 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-30 12:17:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status} {node-problem-detector Update v1 2022-11-30 12:20:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.7.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-autoscaling-migs/us-west1-b/ca-minion-group-wp8h,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.7.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} 
BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-30 12:20:21 +0000 UTC,LastTransitionTime:2022-11-30 09:09:59 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-30 12:20:21 +0000 UTC,LastTransitionTime:2022-11-30 09:09:59 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-30 12:20:21 +0000 UTC,LastTransitionTime:2022-11-30 09:09:59 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-30 12:20:21 +0000 UTC,LastTransitionTime:2022-11-30 09:09:59 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-30 12:20:21 +0000 UTC,LastTransitionTime:2022-11-30 09:09:59 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-30 12:20:21 +0000 UTC,LastTransitionTime:2022-11-30 09:09:59 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-30 12:20:21 +0000 UTC,LastTransitionTime:2022-11-30 09:09:59 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-30 09:10:06 +0000 UTC,LastTransitionTime:2022-11-30 09:10:06 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-30 12:17:41 +0000 UTC,LastTransitionTime:2022-11-30 09:09:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-30 12:17:41 +0000 UTC,LastTransitionTime:2022-11-30 09:09:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-30 12:17:41 +0000 UTC,LastTransitionTime:2022-11-30 09:09:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-30 12:17:41 +0000 UTC,LastTransitionTime:2022-11-30 09:09:56 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.8,},NodeAddress{Type:ExternalIP,Address:34.168.80.138,},NodeAddress{Type:InternalDNS,Address:ca-minion-group-wp8h.c.k8s-jkns-gci-autoscaling-migs.internal,},NodeAddress{Type:Hostname,Address:ca-minion-group-wp8h.c.k8s-jkns-gci-autoscaling-migs.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b506b63fb01c6040e71588bca8be6fdd,SystemUUID:b506b63f-b01c-6040-e715-88bca8be6fdd,BootID:bd2be204-29ef-43fe-9f42-c8f31fa19831,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.6.6,KubeletVersion:v1.27.0-alpha.0.53+d98e9620e37995,KubeProxyVersion:v1.27.0-alpha.0.53+d98e9620e37995,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.53_d98e9620e37995],SizeBytes:67201736,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 30 12:22:18.248: INFO: Logging kubelet events for node ca-minion-group-wp8h Nov 30 12:22:18.292: INFO: Logging pods the kubelet thinks is on node ca-minion-group-wp8h Nov 30 12:22:18.354: INFO: kube-proxy-ca-minion-group-wp8h started at 2022-11-30 09:09:56 +0000 UTC (0+1 container statuses recorded) Nov 30 12:22:18.354: INFO: Container kube-proxy ready: true, restart count 0 Nov 30 12:22:18.354: INFO: volume-snapshot-controller-0 started at 2022-11-30 10:04:18 +0000 UTC (0+1 container statuses recorded) Nov 30 12:22:18.354: INFO: Container volume-snapshot-controller ready: true, restart count 0 Nov 30 12:22:18.354: INFO: coredns-6d97d5ddb-fwzcx started at 
2022-11-30 09:38:58 +0000 UTC (0+1 container statuses recorded) Nov 30 12:22:18.354: INFO: Container coredns ready: true, restart count 0 Nov 30 12:22:18.354: INFO: metrics-server-v0.5.2-867b8754b9-4qcz5 started at 2022-11-30 09:33:57 +0000 UTC (0+2 container statuses recorded) Nov 30 12:22:18.354: INFO: Container metrics-server ready: true, restart count 0 Nov 30 12:22:18.354: INFO: Container metrics-server-nanny ready: true, restart count 0 Nov 30 12:22:18.354: INFO: l7-default-backend-8549d69d99-sn7gr started at 2022-11-30 09:38:58 +0000 UTC (0+1 container statuses recorded) Nov 30 12:22:18.354: INFO: Container default-http-backend ready: true, restart count 0 Nov 30 12:22:18.354: INFO: metadata-proxy-v0.1-kx6wg started at 2022-11-30 09:09:56 +0000 UTC (0+2 container statuses recorded) Nov 30 12:22:18.354: INFO: Container metadata-proxy ready: true, restart count 0 Nov 30 12:22:18.354: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 30 12:22:18.354: INFO: konnectivity-agent-hh8bs started at 2022-11-30 09:10:06 +0000 UTC (0+1 container statuses recorded) Nov 30 12:22:18.354: INFO: Container konnectivity-agent ready: true, restart count 0 Nov 30 12:22:18.570: INFO: Latency metrics for node ca-minion-group-wp8h [DeferCleanup (Each)] [sig-autoscaling] Cluster size autoscaling [Slow] tear down framework | framework.go:193 STEP: Destroying namespace "autoscaling-766" for this suite. 11/30/22 12:22:18.57
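The dump above is the suite's standard post-failure diagnostics: the full v1.Node object (managed fields, capacity and allocatable resources, node-problem-detector conditions, cached images) followed by the pods the kubelet reports for the node. For reference, a minimal client-go sketch that lists the same node-condition data outside the suite; this is illustrative only, and the kubeconfig path is simply the one the run itself reports:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Assumption: same kubeconfig location the e2e run logs above.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		// Mirrors the Conditions:[]NodeCondition{...} block in the dump above.
    		for _, c := range n.Status.Conditions {
    			fmt.Printf("%s\t%s=%s (%s)\n", n.Name, c.Type, c.Status, c.Reason)
    		}
    	}
    }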
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-autoscaling\]\sCluster\ssize\sautoscaling\s\[Slow\]\sshould\sincrease\scluster\ssize\sif\spods\sare\spending\sdue\sto\spod\santi\-affinity\s\[Feature\:ClusterSizeAutoscalingScaleUp\]$'
test/e2e/autoscaling/cluster_size_autoscaling.go:437 k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.15() test/e2e/autoscaling/cluster_size_autoscaling.go:437 +0x10c (from junit_01.xml)
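This spec pins one some-pod replica per node using required pod anti-affinity, so a surplus replica should stay Pending and force the autoscaler to add a node. The exact spec the test builds is not visible in this log; the following is a hedged sketch of such an anti-affinity term, with illustrative names:

    package sketch

    import (
    	v1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // antiAffinityOnHostname forbids two pods carrying the given label from
    // sharing a node. On a fully packed cluster this leaves an extra replica
    // Pending, which is what gives the autoscaler a reason to scale up.
    func antiAffinityOnHostname(key, value string) *v1.Affinity {
    	return &v1.Affinity{
    		PodAntiAffinity: &v1.PodAntiAffinity{
    			RequiredDuringSchedulingIgnoredDuringExecution: []v1.PodAffinityTerm{{
    				LabelSelector: &metav1.LabelSelector{MatchLabels: map[string]string{key: value}},
    				TopologyKey:   "kubernetes.io/hostname",
    			}},
    		},
    	}
    }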
[BeforeEach] [sig-autoscaling] Cluster size autoscaling [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/30/22 13:13:59.108 Nov 30 13:13:59.108: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename autoscaling 11/30/22 13:13:59.11 STEP: Waiting for a default service account to be provisioned in namespace 11/30/22 13:13:59.239 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/30/22 13:13:59.321 [BeforeEach] [sig-autoscaling] Cluster size autoscaling [Slow] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-autoscaling] Cluster size autoscaling [Slow] test/e2e/autoscaling/cluster_size_autoscaling.go:103 STEP: Initial size of ca-minion-group-1: 1 11/30/22 13:14:02.993 STEP: Initial size of ca-minion-group: 1 11/30/22 13:14:06.752 Nov 30 13:14:06.797: INFO: Cluster has reached the desired number of ready nodes 2 STEP: Initial number of schedulable nodes: 2 11/30/22 13:14:06.841 [It] should increase cluster size if pods are pending due to pod anti-affinity [Feature:ClusterSizeAutoscalingScaleUp] test/e2e/autoscaling/cluster_size_autoscaling.go:430 STEP: starting a pod with anti-affinity on each node 11/30/22 13:14:06.841 STEP: creating replication controller some-pod in namespace autoscaling-8443 11/30/22 13:14:06.841 I1130 13:14:06.887596 8016 runners.go:193] Created replication controller with name: some-pod, namespace: autoscaling-8443, replica count: 2 I1130 13:14:16.989206 8016 runners.go:193] some-pod Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1130 13:14:26.990143 8016 runners.go:193] some-pod Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1130 13:14:36.991186 8016 runners.go:193] some-pod Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1130 13:14:46.992056 8016 runners.go:193] some-pod Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1130 13:14:56.992340 8016 runners.go:193] some-pod Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1130 13:15:06.993051 8016 runners.go:193] some-pod Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1130 13:15:16.993498 8016 runners.go:193] some-pod Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1130 13:15:26.994411 8016 runners.go:193] some-pod Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1130 13:15:36.995446 8016 runners.go:193] some-pod Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1130 13:15:46.996281 8016 runners.go:193] some-pod Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1130 13:15:56.997013 8016 runners.go:193] some-pod Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1130 13:16:06.997558 8016 runners.go:193] some-pod Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1130 13:16:16.997952 8016 
runners.go:193] some-pod Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1130 13:16:26.998261 8016 runners.go:193] some-pod Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1130 13:16:36.999394 8016 runners.go:193] some-pod Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1130 13:16:47.000357 8016 runners.go:193] some-pod Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1130 13:16:57.001488 8016 runners.go:193] some-pod Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1130 13:17:07.002533 8016 runners.go:193] some-pod Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1130 13:17:07.002651 8016 runners.go:193] 1 pods disappeared for some-pod: some-pod-hvdcq I1130 13:17:07.002705 8016 runners.go:193] Pod some-pod-bst5n in phase Pending assigned host ca-minion-group-1-mm7j Pod some-pod-hvdcq was deleted, had phase Pending and host ca-minion-group-wjhb Nov 30 13:17:07.002: INFO: Unexpected error: <*errors.errorString | 0xc000ef01e0>: { s: "1 pods disappeared for some-pod: some-pod-hvdcq", } Nov 30 13:17:07.002: FAIL: 1 pods disappeared for some-pod: some-pod-hvdcq Full Stack Trace k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.15() test/e2e/autoscaling/cluster_size_autoscaling.go:437 +0x10c [AfterEach] [sig-autoscaling] Cluster size autoscaling [Slow] test/e2e/framework/node/init/init.go:32 Nov 30 13:17:07.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-autoscaling] Cluster size autoscaling [Slow] test/e2e/autoscaling/cluster_size_autoscaling.go:139 STEP: Restoring initial size of the cluster 11/30/22 13:17:07.048 Nov 30 13:17:14.518: INFO: Cluster has reached the desired number of ready nodes 2 STEP: Remove taint from node ca-master 11/30/22 13:17:14.564 STEP: Remove taint from node ca-minion-group-1-mm7j 11/30/22 13:17:14.606 STEP: Remove taint from node ca-minion-group-wp8h 11/30/22 13:17:14.651 I1130 13:17:14.694549 8016 cluster_size_autoscaling.go:165] Made nodes schedulable again in 130.20004ms [DeferCleanup (Each)] [sig-autoscaling] Cluster size autoscaling [Slow] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-autoscaling] Cluster size autoscaling [Slow] dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/30/22 13:17:14.694 STEP: Collecting events from namespace "autoscaling-8443". 11/30/22 13:17:14.694 STEP: Found 14 events. 
11/30/22 13:17:14.737 Nov 30 13:17:14.737: INFO: At 2022-11-30 13:14:06 +0000 UTC - event for some-pod: {replication-controller } SuccessfulCreate: Created pod: some-pod-bt74k Nov 30 13:17:14.737: INFO: At 2022-11-30 13:14:06 +0000 UTC - event for some-pod: {replication-controller } SuccessfulCreate: Created pod: some-pod-hvdcq Nov 30 13:17:14.737: INFO: At 2022-11-30 13:14:06 +0000 UTC - event for some-pod-bt74k: {default-scheduler } Scheduled: Successfully assigned autoscaling-8443/some-pod-bt74k to ca-minion-group-wp8h Nov 30 13:17:14.737: INFO: At 2022-11-30 13:14:06 +0000 UTC - event for some-pod-hvdcq: {default-scheduler } Scheduled: Successfully assigned autoscaling-8443/some-pod-hvdcq to ca-minion-group-wjhb Nov 30 13:17:14.737: INFO: At 2022-11-30 13:14:07 +0000 UTC - event for some-pod-bt74k: {kubelet ca-minion-group-wp8h} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Nov 30 13:17:14.738: INFO: At 2022-11-30 13:14:07 +0000 UTC - event for some-pod-bt74k: {kubelet ca-minion-group-wp8h} Created: Created container some-pod Nov 30 13:17:14.738: INFO: At 2022-11-30 13:14:07 +0000 UTC - event for some-pod-bt74k: {kubelet ca-minion-group-wp8h} Started: Started container some-pod Nov 30 13:17:14.738: INFO: At 2022-11-30 13:17:05 +0000 UTC - event for some-pod: {replication-controller } SuccessfulCreate: Created pod: some-pod-bst5n Nov 30 13:17:14.738: INFO: At 2022-11-30 13:17:05 +0000 UTC - event for some-pod-bst5n: {default-scheduler } Scheduled: Successfully assigned autoscaling-8443/some-pod-bst5n to ca-minion-group-1-mm7j Nov 30 13:17:14.738: INFO: At 2022-11-30 13:17:05 +0000 UTC - event for some-pod-hvdcq: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod autoscaling-8443/some-pod-hvdcq Nov 30 13:17:14.738: INFO: At 2022-11-30 13:17:06 +0000 UTC - event for some-pod-bst5n: {kubelet ca-minion-group-1-mm7j} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-mzz8q" : failed to sync configmap cache: timed out waiting for the condition Nov 30 13:17:14.738: INFO: At 2022-11-30 13:17:07 +0000 UTC - event for some-pod-bst5n: {kubelet ca-minion-group-1-mm7j} Started: Started container some-pod Nov 30 13:17:14.738: INFO: At 2022-11-30 13:17:07 +0000 UTC - event for some-pod-bst5n: {kubelet ca-minion-group-1-mm7j} Created: Created container some-pod Nov 30 13:17:14.738: INFO: At 2022-11-30 13:17:07 +0000 UTC - event for some-pod-bst5n: {kubelet ca-minion-group-1-mm7j} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Nov 30 13:17:14.781: INFO: POD NODE PHASE GRACE CONDITIONS Nov 30 13:17:14.781: INFO: some-pod-bst5n ca-minion-group-1-mm7j Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-30 13:17:05 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-30 13:17:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-30 13:17:08 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-30 13:17:05 +0000 UTC }] Nov 30 13:17:14.781: INFO: some-pod-bt74k ca-minion-group-wp8h Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-30 13:14:06 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-30 13:14:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-30 13:14:08 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-30 13:14:06 +0000 UTC }] Nov 30 13:17:14.781: INFO: Nov 30 13:17:14.959: INFO: Logging node info for node ca-master Nov 30 13:17:15.002: INFO: Node Info: 
&Node{ObjectMeta:{ca-master 2a25a1e5-76d1-4d88-8f78-b63dca9ba016 51139 0 2022-11-30 08:55:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:ca-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-30 08:55:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-30 08:55:56 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2022-11-30 08:56:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2022-11-30 13:16:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-autoscaling-migs/us-west1-b/ca-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-30 08:55:56 +0000 UTC,LastTransitionTime:2022-11-30 08:55:56 +0000 UTC,Reason:RouteCreated,Message:RouteController 
created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-30 13:16:50 +0000 UTC,LastTransitionTime:2022-11-30 08:55:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-30 13:16:50 +0000 UTC,LastTransitionTime:2022-11-30 08:55:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-30 13:16:50 +0000 UTC,LastTransitionTime:2022-11-30 08:55:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-30 13:16:50 +0000 UTC,LastTransitionTime:2022-11-30 08:56:05 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.230.76.149,},NodeAddress{Type:InternalDNS,Address:ca-master.c.k8s-jkns-gci-autoscaling-migs.internal,},NodeAddress{Type:Hostname,Address:ca-master.c.k8s-jkns-gci-autoscaling-migs.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bac070a384fefb9133ee7878e15673cf,SystemUUID:bac070a3-84fe-fb91-33ee-7878e15673cf,BootID:fe19ddf9-af1e-416e-a389-0ed6e929f60e,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.6.6,KubeletVersion:v1.27.0-alpha.0.53+d98e9620e37995,KubeProxyVersion:v1.27.0-alpha.0.53+d98e9620e37995,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.53_d98e9620e37995],SizeBytes:135160275,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.53_d98e9620e37995],SizeBytes:124990265,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.53_d98e9620e37995],SizeBytes:57660216,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/autoscaling/cluster-autoscaler@sha256:07ab8c89cd0ad296ddb6347febe196d8fe0e1c757656a98f71199860d87cf1a5 registry.k8s.io/autoscaling/cluster-autoscaler:v1.22.0],SizeBytes:24220268,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a 
registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 30 13:17:15.002: INFO: Logging kubelet events for node ca-master Nov 30 13:17:15.047: INFO: Logging pods the kubelet thinks is on node ca-master Nov 30 13:17:15.114: INFO: etcd-server-events-ca-master started at 2022-11-30 08:55:00 +0000 UTC (0+1 container statuses recorded) Nov 30 13:17:15.114: INFO: Container etcd-container ready: true, restart count 0 Nov 30 13:17:15.114: INFO: etcd-server-ca-master started at 2022-11-30 08:55:00 +0000 UTC (0+1 container statuses recorded) Nov 30 13:17:15.114: INFO: Container etcd-container ready: true, restart count 0 Nov 30 13:17:15.114: INFO: cluster-autoscaler-ca-master started at 2022-11-30 08:55:18 +0000 UTC (0+1 container statuses recorded) Nov 30 13:17:15.114: INFO: Container cluster-autoscaler ready: true, restart count 2 Nov 30 13:17:15.114: INFO: l7-lb-controller-ca-master started at 2022-11-30 08:55:18 +0000 UTC (0+1 container statuses recorded) Nov 30 13:17:15.114: INFO: Container l7-lb-controller ready: true, restart count 2 Nov 30 13:17:15.114: INFO: kube-apiserver-ca-master started at 2022-11-30 08:55:00 +0000 UTC (0+1 container statuses recorded) Nov 30 13:17:15.114: INFO: Container kube-apiserver ready: true, restart count 0 Nov 30 13:17:15.114: INFO: kube-controller-manager-ca-master started at 2022-11-30 08:55:00 +0000 UTC (0+1 container statuses recorded) Nov 30 13:17:15.114: INFO: Container kube-controller-manager ready: true, restart count 1 Nov 30 13:17:15.114: INFO: kube-scheduler-ca-master started at 2022-11-30 08:55:00 +0000 UTC (0+1 container statuses recorded) Nov 30 13:17:15.114: INFO: Container kube-scheduler ready: true, restart count 0 Nov 30 13:17:15.114: INFO: konnectivity-server-ca-master started at 2022-11-30 08:55:00 +0000 UTC (0+1 container statuses recorded) Nov 30 13:17:15.114: INFO: Container konnectivity-server-container ready: true, restart count 0 Nov 30 13:17:15.114: INFO: kube-addon-manager-ca-master started at 2022-11-30 08:55:18 +0000 UTC (0+1 container statuses recorded) Nov 30 13:17:15.114: INFO: Container kube-addon-manager ready: true, restart count 0 Nov 30 13:17:15.114: INFO: metadata-proxy-v0.1-vp7mp started at 2022-11-30 08:56:16 +0000 UTC (0+2 container statuses recorded) Nov 30 13:17:15.114: INFO: Container metadata-proxy ready: true, restart count 0 Nov 30 13:17:15.114: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 30 13:17:15.316: INFO: Latency metrics for node ca-master Nov 30 13:17:15.316: INFO: Logging node info for node ca-minion-group-1-mm7j Nov 30 13:17:15.359: INFO: Node Info: &Node{ObjectMeta:{ca-minion-group-1-mm7j 40d95e1e-df8f-497c-a618-47cbb01b66c1 51226 0 2022-11-30 13:15:02 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:ca-minion-group-1-mm7j kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] 
[{kubelet Update v1 2022-11-30 13:15:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {node-problem-detector Update v1 2022-11-30 13:15:07 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-30 13:15:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.51.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-30 13:15:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-30 13:15:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.51.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-autoscaling-migs/us-west1-b/ca-minion-group-1-mm7j,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.51.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 
DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-30 13:15:07 +0000 UTC,LastTransitionTime:2022-11-30 13:15:06 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-30 13:15:07 +0000 UTC,LastTransitionTime:2022-11-30 13:15:06 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-30 13:15:07 +0000 UTC,LastTransitionTime:2022-11-30 13:15:06 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-30 13:15:07 +0000 UTC,LastTransitionTime:2022-11-30 13:15:06 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-30 13:15:07 +0000 UTC,LastTransitionTime:2022-11-30 13:15:06 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-30 13:15:07 +0000 UTC,LastTransitionTime:2022-11-30 13:15:06 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-30 13:15:07 +0000 UTC,LastTransitionTime:2022-11-30 13:15:06 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-30 13:15:09 +0000 UTC,LastTransitionTime:2022-11-30 13:15:09 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-30 13:15:32 +0000 UTC,LastTransitionTime:2022-11-30 13:15:02 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-30 13:15:32 +0000 UTC,LastTransitionTime:2022-11-30 13:15:02 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-30 13:15:32 +0000 UTC,LastTransitionTime:2022-11-30 13:15:02 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-30 13:15:32 +0000 UTC,LastTransitionTime:2022-11-30 13:15:03 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.53,},NodeAddress{Type:ExternalIP,Address:34.145.0.223,},NodeAddress{Type:InternalDNS,Address:ca-minion-group-1-mm7j.c.k8s-jkns-gci-autoscaling-migs.internal,},NodeAddress{Type:Hostname,Address:ca-minion-group-1-mm7j.c.k8s-jkns-gci-autoscaling-migs.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ae2a11e0059c5fa7bcad4e6bceb44819,SystemUUID:ae2a11e0-059c-5fa7-bcad-4e6bceb44819,BootID:27318124-2ed5-493a-91c2-cb421fd44484,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.6.6,KubeletVersion:v1.27.0-alpha.0.53+d98e9620e37995,KubeProxyVersion:v1.27.0-alpha.0.53+d98e9620e37995,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.53_d98e9620e37995],SizeBytes:67201736,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 30 13:17:15.359: INFO: Logging kubelet events for node ca-minion-group-1-mm7j Nov 30 13:17:15.404: INFO: Logging pods the kubelet thinks is on node ca-minion-group-1-mm7j Nov 30 13:17:15.564: INFO: some-pod-bst5n started at 2022-11-30 13:17:05 +0000 UTC (0+1 container statuses recorded) Nov 30 13:17:15.564: INFO: Container some-pod ready: true, restart count 0 Nov 30 13:17:15.564: INFO: kube-proxy-ca-minion-group-1-mm7j started at 2022-11-30 13:15:02 +0000 UTC (0+1 container statuses recorded) Nov 30 13:17:15.564: INFO: Container kube-proxy ready: true, restart count 0 Nov 30 13:17:15.564: INFO: metadata-proxy-v0.1-g8lnd started at 2022-11-30 13:15:03 +0000 UTC (0+2 container statuses recorded) Nov 30 13:17:15.564: INFO: Container metadata-proxy ready: true, restart count 0 Nov 30 13:17:15.564: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 30 13:17:15.564: INFO: konnectivity-agent-sv8wx started at 2022-11-30 13:15:09 +0000 UTC (0+1 container statuses recorded) Nov 30 13:17:15.564: INFO: Container konnectivity-agent ready: true, restart count 0 Nov 30 13:17:15.745: INFO: Latency metrics for node ca-minion-group-1-mm7j Nov 30 13:17:15.745: INFO: Logging node info for node ca-minion-group-wp8h Nov 30 13:17:15.788: INFO: Node Info: &Node{ObjectMeta:{ca-minion-group-wp8h cabb7ed6-6a4e-4a14-a7cb-07ef65191e0f 50925 0 2022-11-30 09:09:55 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:ca-minion-group-wp8h kubernetes.io/os:linux 
node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-30 09:09:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-30 09:10:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.7.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-30 09:10:06 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-30 13:13:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status} {node-problem-detector Update v1 2022-11-30 13:15:29 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.7.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-autoscaling-migs/us-west1-b/ca-minion-group-wp8h,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.7.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-30 13:15:29 +0000 UTC,LastTransitionTime:2022-11-30 09:09:59 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-30 13:15:29 +0000 UTC,LastTransitionTime:2022-11-30 09:09:59 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-30 13:15:29 +0000 UTC,LastTransitionTime:2022-11-30 09:09:59 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-30 13:15:29 +0000 UTC,LastTransitionTime:2022-11-30 09:09:59 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-30 13:15:29 +0000 UTC,LastTransitionTime:2022-11-30 09:09:59 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-30 13:15:29 +0000 UTC,LastTransitionTime:2022-11-30 09:09:59 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-30 13:15:29 +0000 UTC,LastTransitionTime:2022-11-30 09:09:59 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-30 09:10:06 +0000 UTC,LastTransitionTime:2022-11-30 09:10:06 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-30 13:13:48 +0000 UTC,LastTransitionTime:2022-11-30 09:09:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-30 13:13:48 +0000 UTC,LastTransitionTime:2022-11-30 09:09:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-30 13:13:48 +0000 UTC,LastTransitionTime:2022-11-30 09:09:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-30 13:13:48 +0000 UTC,LastTransitionTime:2022-11-30 09:09:56 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.8,},NodeAddress{Type:ExternalIP,Address:34.168.80.138,},NodeAddress{Type:InternalDNS,Address:ca-minion-group-wp8h.c.k8s-jkns-gci-autoscaling-migs.internal,},NodeAddress{Type:Hostname,Address:ca-minion-group-wp8h.c.k8s-jkns-gci-autoscaling-migs.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b506b63fb01c6040e71588bca8be6fdd,SystemUUID:b506b63f-b01c-6040-e715-88bca8be6fdd,BootID:bd2be204-29ef-43fe-9f42-c8f31fa19831,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.6.6,KubeletVersion:v1.27.0-alpha.0.53+d98e9620e37995,KubeProxyVersion:v1.27.0-alpha.0.53+d98e9620e37995,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.53_d98e9620e37995],SizeBytes:67201736,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 30 13:17:15.788: INFO: Logging kubelet events for node ca-minion-group-wp8h Nov 30 13:17:15.833: INFO: Logging pods the kubelet thinks is on node ca-minion-group-wp8h Nov 30 13:17:15.882: INFO: l7-default-backend-8549d69d99-sn7gr started at 2022-11-30 09:38:58 +0000 UTC (0+1 container statuses recorded) Nov 30 13:17:15.882: INFO: Container default-http-backend ready: true, restart count 0 Nov 30 13:17:15.882: INFO: metadata-proxy-v0.1-kx6wg started at 2022-11-30 09:09:56 +0000 UTC (0+2 container statuses recorded) Nov 30 13:17:15.882: INFO: Container metadata-proxy ready: true, restart count 0 Nov 30 13:17:15.882: INFO: Container prometheus-to-sd-exporter 
ready: true, restart count 0 Nov 30 13:17:15.882: INFO: konnectivity-agent-hh8bs started at 2022-11-30 09:10:06 +0000 UTC (0+1 container statuses recorded) Nov 30 13:17:15.882: INFO: Container konnectivity-agent ready: true, restart count 0 Nov 30 13:17:15.882: INFO: kube-proxy-ca-minion-group-wp8h started at 2022-11-30 09:09:56 +0000 UTC (0+1 container statuses recorded) Nov 30 13:17:15.882: INFO: Container kube-proxy ready: true, restart count 0 Nov 30 13:17:15.882: INFO: volume-snapshot-controller-0 started at 2022-11-30 10:04:18 +0000 UTC (0+1 container statuses recorded) Nov 30 13:17:15.882: INFO: Container volume-snapshot-controller ready: true, restart count 0 Nov 30 13:17:15.882: INFO: coredns-6d97d5ddb-fwzcx started at 2022-11-30 09:38:58 +0000 UTC (0+1 container statuses recorded) Nov 30 13:17:15.882: INFO: Container coredns ready: true, restart count 0 Nov 30 13:17:15.882: INFO: some-pod-bt74k started at 2022-11-30 13:14:06 +0000 UTC (0+1 container statuses recorded) Nov 30 13:17:15.882: INFO: Container some-pod ready: true, restart count 0 Nov 30 13:17:15.882: INFO: metrics-server-v0.5.2-867b8754b9-4qcz5 started at 2022-11-30 09:33:57 +0000 UTC (0+2 container statuses recorded) Nov 30 13:17:15.882: INFO: Container metrics-server ready: true, restart count 0 Nov 30 13:17:15.882: INFO: Container metrics-server-nanny ready: true, restart count 0 Nov 30 13:17:16.088: INFO: Latency metrics for node ca-minion-group-wp8h [DeferCleanup (Each)] [sig-autoscaling] Cluster size autoscaling [Slow] tear down framework | framework.go:193 STEP: Destroying namespace "autoscaling-8443" for this suite. 11/30/22 13:17:16.089
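What the events above record: the Pending replica some-pod-hvdcq had been assigned to ca-minion-group-wjhb, a node that no longer appears in the node dumps, and the taint-controller's "Cancelling deletion of Pod" event at 13:17:05 suggests NoExecute eviction machinery was acting on it around the same time. The replication controller replaced it with some-pod-bst5n, so the pod runner, which tracks replicas by name, failed with "1 pods disappeared". A hedged reconstruction of that bookkeeping (the real check lives in the runners.go shown in the log and may differ):

    package sketch

    // disappearedPods returns the names of initially created pods that are no
    // longer observed, the condition that produced the failure
    // "1 pods disappeared for some-pod: some-pod-hvdcq" above.
    func disappearedPods(created []string, observed map[string]bool) []string {
    	var gone []string
    	for _, name := range created {
    		if !observed[name] {
    			gone = append(gone, name) // e.g. some-pod-hvdcq after its node was removed
    		}
    	}
    	return gone
    }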
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-autoscaling\]\sCluster\ssize\sautoscaling\s\[Slow\]\sshould\sscale\sdown\swhen\sexpendable\spod\sis\srunning\s\[Feature\:ClusterSizeAutoscalingScaleDown\]$'
test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 +0x1bc (from junit_01.xml)
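The Ginkgo progress reports further down show where this spec parks: WaitForClusterSizeFuncWithUnready sleeping in time.Sleep(0x4a817c800), i.e. 20,000,000,000 ns = 20 s per poll (matching the ~20 s cadence of the "Waiting for cluster with func" lines), with 0x1176592e000 ns = 20 min passed as the timeout argument. A minimal sketch of that poll loop, under assumed names rather than the real signatures in cluster_size_autoscaling.go:

    package sketch

    import (
    	"fmt"
    	"time"
    )

    // waitForClusterSize polls a size predicate until it passes or the timeout
    // elapses. Interval and timeout mirror the constants visible in the trace:
    // 0x4a817c800 ns = 20s per iteration, 0x1176592e000 ns = 20min overall.
    func waitForClusterSize(getSize func() (int, error), ok func(int) bool, timeout time.Duration) error {
    	const interval = 20 * time.Second
    	for start := time.Now(); time.Since(start) < timeout; time.Sleep(interval) {
    		n, err := getSize()
    		if err != nil {
    			return err
    		}
    		if ok(n) {
    			fmt.Println("Cluster has reached the desired size")
    			return nil
    		}
    		fmt.Printf("Waiting for cluster with func, current size %d\n", n)
    	}
    	return fmt.Errorf("timed out waiting for cluster size after %v", timeout)
    }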
[BeforeEach] [sig-autoscaling] Cluster size autoscaling [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/30/22 11:08:41.308 Nov 30 11:08:41.309: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename autoscaling 11/30/22 11:08:41.311 STEP: Waiting for a default service account to be provisioned in namespace 11/30/22 11:08:41.447 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/30/22 11:08:41.528 [BeforeEach] [sig-autoscaling] Cluster size autoscaling [Slow] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-autoscaling] Cluster size autoscaling [Slow] test/e2e/autoscaling/cluster_size_autoscaling.go:103 STEP: Initial size of ca-minion-group-1: 1 11/30/22 11:08:45.357 STEP: Initial size of ca-minion-group: 1 11/30/22 11:08:48.871 Nov 30 11:08:48.916: INFO: Cluster has reached the desired number of ready nodes 2 STEP: Initial number of schedulable nodes: 2 11/30/22 11:08:48.96 [It] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] test/e2e/autoscaling/cluster_size_autoscaling.go:985 STEP: Manually increase cluster size 11/30/22 11:08:49.051 STEP: Setting size of ca-minion-group-1 to 3 11/30/22 11:08:52.5 Nov 30 11:08:52.501: INFO: Skipping dumping logs from cluster Nov 30 11:08:57.078: INFO: Skipping dumping logs from cluster STEP: Setting size of ca-minion-group to 3 11/30/22 11:09:00.46 Nov 30 11:09:00.460: INFO: Skipping dumping logs from cluster Nov 30 11:09:05.192: INFO: Skipping dumping logs from cluster I1130 11:09:11.940354 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 2, not ready nodes 0 I1130 11:09:38.823697 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 2, not ready nodes 0 I1130 11:10:06.283556 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 2, not ready nodes 0 I1130 11:10:26.333872 8016 cluster_size_autoscaling.go:1381] Cluster has reached the desired size STEP: Running RC which reserves 30252 MB of memory 11/30/22 11:10:26.333 STEP: creating replication controller memory-reservation in namespace autoscaling-469 11/30/22 11:10:26.334 I1130 11:10:26.476232 8016 runners.go:193] Created replication controller with name: memory-reservation, namespace: autoscaling-469, replica count: 6 I1130 11:10:36.527437 8016 runners.go:193] memory-reservation Pods: 6 out of 6 created, 6 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: Waiting for scale down 11/30/22 11:10:36.527 I1130 11:10:36.579557 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 I1130 11:10:56.633030 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 I1130 11:11:16.685384 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 I1130 11:11:36.772226 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 I1130 11:11:56.821575 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 I1130 11:12:16.873240 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 I1130 11:12:36.926411 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0 I1130 11:12:56.975937 8016 cluster_size_autoscaling.go:1384] 
Waiting for cluster with func, current size 6, not ready nodes 0
I1130 11:13:17.024506 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0
I1130 11:13:37.140324 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0
------------------------------
Automatically polling progress:
  [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 5m7.654s)
    test/e2e/autoscaling/cluster_size_autoscaling.go:985
    In [It] (Node Runtime: 5m0.002s)
      test/e2e/autoscaling/cluster_size_autoscaling.go:985
      At [By Step] Waiting for scale down (Step Runtime: 3m12.436s)
        test/e2e/autoscaling/cluster_size_autoscaling.go:991

  Spec Goroutine
  goroutine 4301 [sleep]
    time.Sleep(0x4a817c800)
      /usr/local/go/src/runtime/time.go:195
    > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc003f571e0}, 0xc000a2df38, 0x1176592e000, 0x0)
      test/e2e/autoscaling/cluster_size_autoscaling.go:1364
    > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...)
      test/e2e/autoscaling/cluster_size_autoscaling.go:1359
    > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36()
      test/e2e/autoscaling/cluster_size_autoscaling.go:992
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000951b00})
      vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
I1130 11:13:57.189059 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0
I1130 11:14:17.238416 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0
I1130 11:14:37.289359 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0
I1130 11:14:57.339642 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0
I1130 11:15:17.388328 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0
I1130 11:15:37.437641 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0
I1130 11:15:57.487810 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0
I1130 11:16:17.537308 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0
I1130 11:16:37.587745 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 0
Nov 30 11:16:57.639: INFO: Condition Ready of node ca-minion-group-1-zl6v is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669806836 PreferNoSchedule <nil>} {ToBeDeletedByClusterAutoscaler 1669806967 NoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-11-30 11:16:46 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 11:16:51 +0000 UTC}]. Failure
I1130 11:16:57.639512 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 1
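
The goroutine dump above shows where the spec spends its time: WaitForClusterSizeFuncWithUnready sleeps between polls, and the hex arguments decode directly — time.Sleep(0x4a817c800) is 20,000,000,000 ns (a 20s poll interval), and the 0x1176592e000 argument is 1,200,000,000,000 ns (a 20-minute budget), with the trailing 0x0 the number of unready nodes tolerated. A minimal sketch of that polling pattern follows; getClusterSize and sizePredicate are hypothetical stand-ins for the real node-counting logic in test/e2e/autoscaling/cluster_size_autoscaling.go, not the actual source.

package main

import (
	"fmt"
	"time"
)

// waitForClusterSize sketches the wait loop visible in the dump: poll every
// 20s, give up after the timeout, succeed once the size predicate holds and
// no more than expectedUnready nodes are unready.
func waitForClusterSize(getClusterSize func() (size, notReady int),
	sizePredicate func(readyNodes int) bool,
	timeout time.Duration, expectedUnready int) error {
	const pollInterval = 20 * time.Second // time.Sleep(0x4a817c800)
	for start := time.Now(); time.Since(start) < timeout; time.Sleep(pollInterval) {
		size, notReady := getClusterSize()
		fmt.Printf("Waiting for cluster with func, current size %d, not ready nodes %d\n",
			size, notReady)
		if sizePredicate(size-notReady) && notReady <= expectedUnready {
			return nil
		}
	}
	return fmt.Errorf("timeout after %v waiting for cluster size", timeout)
}

func main() {
	// Toy usage: pretend the cluster is already at the target size of 1.
	err := waitForClusterSize(
		func() (int, int) { return 1, 0 },
		func(ready int) bool { return ready == 1 },
		20*time.Minute, // 0x1176592e000 ns in the dump
		0,              // the trailing 0x0: no unready nodes tolerated
	)
	fmt.Println("wait result:", err)
}
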
I1130 11:17:17.689892 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 1
I1130 11:17:37.746828 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 1
I1130 11:17:57.803864 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 1
I1130 11:18:17.868046 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 6, not ready nodes 1
I1130 11:18:37.919188 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 5, not ready nodes 0
I1130 11:18:57.970921 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 5, not ready nodes 0
I1130 11:19:18.018662 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 5, not ready nodes 0
I1130 11:19:38.066419 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 5, not ready nodes 0
I1130 11:19:58.113382 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 5, not ready nodes 0
I1130 11:20:18.161564 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 5, not ready nodes 0
I1130 11:20:38.209640 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 5, not ready nodes 0
I1130 11:20:58.257761 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 5, not ready nodes 0
Nov 30 11:21:18.307: INFO: Condition Ready of node ca-minion-group-1-6s9n is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669806836 PreferNoSchedule <nil>} {ToBeDeletedByClusterAutoscaler 1669807219 NoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-11-30 11:21:01 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 11:21:06 +0000 UTC}]. Failure
Nov 30 11:21:18.307: INFO: Condition Ready of node ca-minion-group-1-gxs4 is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669806836 PreferNoSchedule <nil>} {ToBeDeletedByClusterAutoscaler 1669807219 NoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-11-30 11:21:01 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 11:21:16 +0000 UTC}]. Failure
Nov 30 11:21:18.307: INFO: Condition Ready of node ca-minion-group-mrqh is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
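
Two taints recur on the NotReady nodes above: DeletionCandidateOfClusterAutoscaler (soft, PreferNoSchedule) and ToBeDeletedByClusterAutoscaler (hard, NoSchedule). Their values (e.g. 1669807219) are Unix timestamps recorded by the cluster autoscaler when it marks a node for scale-down, so a node reporting Ready=false while carrying these taints is being drained deliberately rather than failing. A small client-go sketch for surfacing those markers during triage; it assumes a reachable kubeconfig and is not part of the test itself:

package main

import (
	"context"
	"flag"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := flag.String("kubeconfig", "/workspace/.kube/config", "path to kubeconfig")
	flag.Parse()

	cfg, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		for _, t := range n.Spec.Taints {
			// The autoscaler's scale-down markers carry a Unix-seconds timestamp as value.
			if t.Key == "ToBeDeletedByClusterAutoscaler" || t.Key == "DeletionCandidateOfClusterAutoscaler" {
				fmt.Printf("%s: %s=%s (%s) since %s\n",
					n.Name, t.Key, t.Value, t.Effect, decodeUnix(t.Value))
			}
		}
	}
}

// decodeUnix renders a taint value like "1669807219" as an RFC 3339 time.
func decodeUnix(v string) string {
	var secs int64
	if _, err := fmt.Sscanf(v, "%d", &secs); err != nil {
		return "unknown"
	}
	return time.Unix(secs, 0).UTC().Format(time.RFC3339)
}
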
I1130 11:21:18.307631 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 5, not ready nodes 3
Nov 30 11:21:38.356: INFO: Condition Ready of node ca-minion-group-zm08 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
I1130 11:21:38.356409 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 5, not ready nodes 4
I1130 11:21:58.404594 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 5, not ready nodes 4
I1130 11:22:18.452673 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 5, not ready nodes 4
Nov 30 11:22:38.538: INFO: Condition Ready of node ca-minion-group-1-gxs4 is false, but Node is tainted by NodeController with [{DeletionCandidateOfClusterAutoscaler 1669806836 PreferNoSchedule <nil>} {ToBeDeletedByClusterAutoscaler 1669807219 NoSchedule <nil>} {node.kubernetes.io/unreachable NoSchedule 2022-11-30 11:21:01 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-30 11:21:16 +0000 UTC}]. Failure
Nov 30 11:22:38.538: INFO: Condition Ready of node ca-minion-group-mrqh is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Nov 30 11:22:38.538: INFO: Condition Ready of node ca-minion-group-zm08 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
I1130 11:22:38.538346 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 4, not ready nodes 3
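
The spec being polled here ("should scale down when expendable pod is running") turns on pod priority: the cluster autoscaler treats pods whose priority sits below its --expendable-pods-priority-cutoff (default -10) as expendable, so they do not keep a node alive during scale-down consideration. A hedged sketch of the kind of setup such a test needs; the class name and value below are illustrative, not the ones hard-coded in the e2e suite:

package main

import (
	"context"
	"fmt"

	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// createExpendablePriorityClass creates a PriorityClass whose value sits
// below the autoscaler's expendable-pods cutoff, so pods scheduled with it
// do not block node removal. Name and value are assumptions for this sketch.
func createExpendablePriorityClass(cs kubernetes.Interface) error {
	pc := &schedulingv1.PriorityClass{
		ObjectMeta:    metav1.ObjectMeta{Name: "expendable-priority"},
		Value:         -15, // below the default cutoff of -10 => expendable
		GlobalDefault: false,
		Description:   "pods at this priority may be evicted by cluster-autoscaler scale-down",
	}
	_, err := cs.SchedulingV1().PriorityClasses().Create(context.TODO(), pc, metav1.CreateOptions{})
	return err
}

func main() {
	cfg, err := rest.InClusterConfig() // or load a kubeconfig when run out of cluster
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := createExpendablePriorityClass(cs); err != nil {
		panic(err)
	}
	fmt.Println("created expendable PriorityClass")
}
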
I1130 11:22:58.583455 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 1, not ready nodes 0
I1130 11:23:18.628052 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 1, not ready nodes 0
I1130 11:23:38.751759 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 1, not ready nodes 0
I1130 11:23:58.795646 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 1, not ready nodes 0
I1130 11:24:18.839004 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 1, not ready nodes 0
I1130 11:24:38.883111 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 1, not ready nodes 0
I1130 11:24:58.928437 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 1, not ready nodes 0
I1130 11:25:18.973547 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 1, not ready nodes 0
I1130 11:25:39.019915 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 1, not ready nodes 0
I1130 11:25:59.064155 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 1, not ready nodes 0
I1130 11:26:19.111073 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 1, not ready nodes 0
I1130 11:26:39.157636 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 1, not ready nodes 0
------------------------------
Automatically polling progress:
  [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 18m7.75s)
    test/e2e/autoscaling/cluster_size_autoscaling.go:985
    In [It] (Node Runtime: 18m0.097s)
      test/e2e/autoscaling/cluster_size_autoscaling.go:985
      At [By Step] Waiting for scale down (Step Runtime: 16m12.531s)
        test/e2e/autoscaling/cluster_size_autoscaling.go:991

  Spec Goroutine
  goroutine 4301 [sleep]
    time.Sleep(0x4a817c800)
      /usr/local/go/src/runtime/time.go:195
    > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc003f571e0}, 0xc000a2df38, 0x1176592e000, 0x0)
      test/e2e/autoscaling/cluster_size_autoscaling.go:1364
    > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...)
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000951b00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1130 11:26:59.204838 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 1, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 18m27.754s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 In [It] (Node Runtime: 18m20.101s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 At [By Step] Waiting for scale down (Step Runtime: 16m32.535s) test/e2e/autoscaling/cluster_size_autoscaling.go:991 Spec Goroutine goroutine 4301 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc003f571e0}, 0xc000a2df38, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000951b00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1130 11:27:19.252698 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 1, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 18m47.755s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 In [It] (Node Runtime: 18m40.102s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 At [By Step] Waiting for scale down (Step Runtime: 16m52.536s) test/e2e/autoscaling/cluster_size_autoscaling.go:991 Spec Goroutine goroutine 4301 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc003f571e0}, 0xc000a2df38, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000951b00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1130 11:27:39.297209 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 1, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 19m7.756s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 In [It] (Node Runtime: 19m0.103s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 At [By Step] Waiting for scale down (Step Runtime: 17m12.537s) test/e2e/autoscaling/cluster_size_autoscaling.go:991 Spec Goroutine goroutine 4301 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc003f571e0}, 0xc000a2df38, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000951b00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1130 11:27:59.342172 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 1, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 19m27.757s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 In [It] (Node Runtime: 19m20.105s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 At [By Step] Waiting for scale down (Step Runtime: 17m32.538s) test/e2e/autoscaling/cluster_size_autoscaling.go:991 Spec Goroutine goroutine 4301 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc003f571e0}, 0xc000a2df38, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000951b00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1130 11:28:19.387383 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 1, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 19m47.758s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 In [It] (Node Runtime: 19m40.106s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 At [By Step] Waiting for scale down (Step Runtime: 17m52.539s) test/e2e/autoscaling/cluster_size_autoscaling.go:991 Spec Goroutine goroutine 4301 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc003f571e0}, 0xc000a2df38, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000951b00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1130 11:28:39.434614 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 1, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 20m7.761s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 In [It] (Node Runtime: 20m0.108s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 At [By Step] Waiting for scale down (Step Runtime: 18m12.542s) test/e2e/autoscaling/cluster_size_autoscaling.go:991 Spec Goroutine goroutine 4301 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc003f571e0}, 0xc000a2df38, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000951b00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1130 11:28:59.481841 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 1, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 20m27.762s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 In [It] (Node Runtime: 20m20.109s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 At [By Step] Waiting for scale down (Step Runtime: 18m32.543s) test/e2e/autoscaling/cluster_size_autoscaling.go:991 Spec Goroutine goroutine 4301 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc003f571e0}, 0xc000a2df38, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000951b00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1130 11:29:19.528394 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 1, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 20m47.763s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 In [It] (Node Runtime: 20m40.11s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 At [By Step] Waiting for scale down (Step Runtime: 18m52.544s) test/e2e/autoscaling/cluster_size_autoscaling.go:991 Spec Goroutine goroutine 4301 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc003f571e0}, 0xc000a2df38, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000951b00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1130 11:29:39.574472 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 1, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 21m7.765s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 In [It] (Node Runtime: 21m0.112s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 At [By Step] Waiting for scale down (Step Runtime: 19m12.546s) test/e2e/autoscaling/cluster_size_autoscaling.go:991 Spec Goroutine goroutine 4301 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc003f571e0}, 0xc000a2df38, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) test/e2e/autoscaling/cluster_size_autoscaling.go:1359 > k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36() test/e2e/autoscaling/cluster_size_autoscaling.go:992 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000951b00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ I1130 11:29:59.619209 8016 cluster_size_autoscaling.go:1384] Waiting for cluster with func, current size 1, not ready nodes 0 ------------------------------ Automatically polling progress: [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown] (Spec Runtime: 21m27.767s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 In [It] (Node Runtime: 21m20.114s) test/e2e/autoscaling/cluster_size_autoscaling.go:985 At [By Step] Waiting for scale down (Step Runtime: 19m32.548s) test/e2e/autoscaling/cluster_size_autoscaling.go:991 Spec Goroutine goroutine 4301 [sleep] time.Sleep(0x4a817c800) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFuncWithUnready({0x801de88, 0xc003f571e0}, 0xc000a2df38, 0x1176592e000, 0x0) test/e2e/autoscaling/cluster_size_autoscaling.go:1364 > k8s.io/kubernetes/test/e2e/autoscaling.WaitForClusterSizeFunc(...) 
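The goroutine dumps above show the shape of the hang: the spec is parked in time.Sleep(0x4a817c800), which is 20 s in nanoseconds, inside WaitForClusterSizeFuncWithUnready, whose 0x1176592e000 ns argument is the 20-minute budget that expires below. A minimal sketch of that poll-and-sleep pattern follows; apart from the two logged strings, the helper name, signature, and ready-node accounting are assumptions for illustration, not the actual test code.

package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/fake"
)

// waitForClusterSize polls the node list every 20s (the 0x4a817c800 ns sleep
// in the dump) until sizeFunc accepts the current size and the number of
// not-ready nodes is within tolerance, or until the timeout expires.
func waitForClusterSize(c kubernetes.Interface, sizeFunc func(int) bool, timeout time.Duration, tolerateUnready int) error {
	for start := time.Now(); time.Since(start) < timeout; time.Sleep(20 * time.Second) {
		nodes, err := c.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			return err
		}
		notReady := 0
		for _, n := range nodes.Items {
			ready := false
			for _, cond := range n.Status.Conditions {
				if cond.Type == v1.NodeReady && cond.Status == v1.ConditionTrue {
					ready = true
				}
			}
			if !ready {
				notReady++
			}
		}
		size := len(nodes.Items)
		// The heartbeat line repeated throughout the log above.
		fmt.Printf("Waiting for cluster with func, current size %d, not ready nodes %d\n", size, notReady)
		if sizeFunc(size-notReady) && notReady <= tolerateUnready {
			return nil
		}
	}
	// The error string that surfaces as the FAIL below.
	return fmt.Errorf("timeout waiting %v for appropriate cluster size", timeout)
}

func main() {
	// Exercise the helper against a fake clientset holding one Ready node.
	node := &v1.Node{
		ObjectMeta: metav1.ObjectMeta{Name: "n1"},
		Status:     v1.NodeStatus{Conditions: []v1.NodeCondition{{Type: v1.NodeReady, Status: v1.ConditionTrue}}},
	}
	err := waitForClusterSize(fake.NewSimpleClientset(node), func(n int) bool { return n <= 1 }, time.Minute, 0)
	fmt.Println(err) // <nil>: one ready node already satisfies the predicate
}

In this run the predicate was never satisfied: the heartbeat reports "current size 1" for the whole 20 minutes, so the loop falls through to the timeout error.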
Nov 30 11:30:39.663: INFO: Unexpected error:
<*errors.errorString | 0xc000ed7870>: {
    s: "timeout waiting 20m0s for appropriate cluster size",
}
Nov 30 11:30:39.663: FAIL: timeout waiting 20m0s for appropriate cluster size

Full Stack Trace
k8s.io/kubernetes/test/e2e/autoscaling.glob..func3.36()
	test/e2e/autoscaling/cluster_size_autoscaling.go:992 +0x1bc
STEP: deleting ReplicationController memory-reservation in namespace autoscaling-469, will wait for the garbage collector to delete the pods 11/30/22 11:30:39.663
Nov 30 11:30:39.805: INFO: Deleting ReplicationController memory-reservation took: 45.779125ms
Nov 30 11:30:40.005: INFO: Terminating ReplicationController memory-reservation pods took: 200.110612ms
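The failed spec, "should scale down when expendable pod is running", exercises the autoscaler's notion of expendable pods: cluster-autoscaler ignores pods whose priority lies below its --expendable-pods-priority-cutoff flag (default -10) when deciding whether a node may be removed, so such pods should not have blocked the scale-down awaited above. A hypothetical PriorityClass of that kind, created with client-go; the name, value, and kubeconfig handling here are illustrative, not what the suite actually creates.

package main

import (
	"context"
	"os"

	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pc := &schedulingv1.PriorityClass{
		ObjectMeta:  metav1.ObjectMeta{Name: "expendable-priority"}, // hypothetical name
		Value:       -15,                                           // below the default -10 cutoff, so pods using it are expendable
		Description: "pods that must not block cluster-autoscaler scale-down",
	}
	if _, err := client.SchedulingV1().PriorityClasses().Create(context.TODO(), pc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

A pod opts in through spec.priorityClassName. With every remaining pod on the spare node expendable, the autoscaler should have drained it well inside the 20-minute window, which is exactly what did not happen in this run.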
[AfterEach] [sig-autoscaling] Cluster size autoscaling [Slow]
  test/e2e/framework/node/init/init.go:32
Nov 30 11:30:40.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-autoscaling] Cluster size autoscaling [Slow]
  test/e2e/autoscaling/cluster_size_autoscaling.go:139
STEP: Restoring initial size of the cluster 11/30/22 11:30:40.741
STEP: Setting size of ca-minion-group-1 to 1 11/30/22 11:30:49.399
Nov 30 11:30:49.399: INFO: Skipping dumping logs from cluster
Nov 30 11:30:54.193: INFO: Skipping dumping logs from cluster
Nov 30 11:30:54.242: INFO: Waiting for ready nodes 2, current ready 1, not ready nodes 0
Nov 30 11:31:14.287: INFO: Waiting for ready nodes 2, current ready 1, not ready nodes 0
Nov 30 11:31:34.332: INFO: Waiting for ready nodes 2, current ready 1, not ready nodes 0
Nov 30 11:31:54.385: INFO: Waiting for ready nodes 2, current ready 1, not ready nodes 0
Nov 30 11:32:14.432: INFO: Condition NetworkUnavailable of node ca-minion-group-1-3rkr is true instead of false. Reason: NoRouteCreated, message: Node created without a route
Nov 30 11:32:14.432: INFO: Waiting for ready nodes 2, current ready 1, not ready nodes 1
Nov 30 11:32:34.481: INFO: Cluster has reached the desired number of ready nodes 2
STEP: Remove taint from node ca-master 11/30/22 11:32:34.526
STEP: Remove taint from node ca-minion-group-1-3rkr 11/30/22 11:32:34.57
STEP: Remove taint from node ca-minion-group-wp8h 11/30/22 11:32:34.613
I1130 11:32:34.656059 8016 cluster_size_autoscaling.go:165] Made nodes schedulable again in 129.801626ms
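Restoring schedulability above is plain taint removal. The suite's own helper (logged from cluster_size_autoscaling.go:165) is not shown in this log; a minimal hand-rolled equivalent with client-go might look like the following, where the taint key is a placeholder and a production version would retry on update conflicts.

package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// removeTaint drops every taint with the given key from a node, mirroring the
// "Remove taint from node ..." steps above.
func removeTaint(c kubernetes.Interface, nodeName, taintKey string) error {
	node, err := c.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	kept := node.Spec.Taints[:0]
	for _, t := range node.Spec.Taints {
		if t.Key != taintKey {
			kept = append(kept, t)
		}
	}
	node.Spec.Taints = kept
	// A robust version would wrap this update in retry.RetryOnConflict.
	_, err = c.CoreV1().Nodes().Update(context.TODO(), node, metav1.UpdateOptions{})
	return err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	// Node name taken from the log; the taint key is a hypothetical example.
	fmt.Println(removeTaint(client, "ca-minion-group-wp8h", "example.com/test-taint"))
}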
[DeferCleanup (Each)] [sig-autoscaling] Cluster size autoscaling [Slow]
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-autoscaling] Cluster size autoscaling [Slow]
  dump namespaces | framework.go:196
STEP: dump namespace information after failure 11/30/22 11:32:34.656
STEP: Collecting events from namespace "autoscaling-469". 11/30/22 11:32:34.656
STEP: Found 57 events. 11/30/22 11:32:34.703
Nov 30 11:32:34.703: INFO: At 2022-11-30 11:10:26 +0000 UTC - event for memory-reservation: {replication-controller } SuccessfulCreate: Created pod: memory-reservation-6tkpn
Nov 30 11:32:34.703: INFO: At 2022-11-30 11:10:26 +0000 UTC - event for memory-reservation: {replication-controller } SuccessfulCreate: Created pod: memory-reservation-2hjsh
Nov 30 11:32:34.703: INFO: At 2022-11-30 11:10:26 +0000 UTC - event for memory-reservation: {replication-controller } SuccessfulCreate: Created pod: memory-reservation-24f78
Nov 30 11:32:34.703: INFO: At 2022-11-30 11:10:26 +0000 UTC - event for memory-reservation: {replication-controller } SuccessfulCreate: Created pod: memory-reservation-v4pf2
Nov 30 11:32:34.703: INFO: At 2022-11-30 11:10:26 +0000 UTC - event for memory-reservation: {replication-controller } SuccessfulCreate: Created pod: memory-reservation-wpmmp
Nov 30 11:32:34.703: INFO: At 2022-11-30 11:10:26 +0000 UTC - event for memory-reservation: {replication-controller } SuccessfulCreate: Created pod: memory-reservation-cjbrg
Nov 30 11:32:34.703: INFO: At 2022-11-30 11:10:26 +0000 UTC - event for memory-reservation-24f78: {default-scheduler } FailedScheduling: 0/7 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 1 node(s) were unschedulable, 3 node(s) had untolerated taint {node.kubernetes.io/network-unavailable: }, 4 Insufficient memory. preemption: 0/7 nodes are available: 3 No preemption victims found for incoming pod, 4 Preemption is not helpful for scheduling..
Nov 30 11:32:34.703: INFO: At 2022-11-30 11:10:26 +0000 UTC - event for memory-reservation-2hjsh: {default-scheduler } Scheduled: Successfully assigned autoscaling-469/memory-reservation-2hjsh to ca-minion-group-1-zl6v
Nov 30 11:32:34.703: INFO: At 2022-11-30 11:10:26 +0000 UTC - event for memory-reservation-6tkpn: {default-scheduler } FailedScheduling: 0/7 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 1 node(s) were unschedulable, 3 node(s) had untolerated taint {node.kubernetes.io/network-unavailable: }, 4 Insufficient memory. preemption: 0/7 nodes are available: 3 No preemption victims found for incoming pod, 4 Preemption is not helpful for scheduling..
Nov 30 11:32:34.703: INFO: At 2022-11-30 11:10:26 +0000 UTC - event for memory-reservation-cjbrg: {default-scheduler } Scheduled: Successfully assigned autoscaling-469/memory-reservation-cjbrg to ca-minion-group-wp8h
Nov 30 11:32:34.703: INFO: At 2022-11-30 11:10:26 +0000 UTC - event for memory-reservation-v4pf2: {default-scheduler } Scheduled: Successfully assigned autoscaling-469/memory-reservation-v4pf2 to ca-minion-group-1-6s9n
Nov 30 11:32:34.703: INFO: At 2022-11-30 11:10:26 +0000 UTC - event for memory-reservation-wpmmp: {default-scheduler } FailedScheduling: 0/7 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 1 node(s) were unschedulable, 3 node(s) had untolerated taint {node.kubernetes.io/network-unavailable: }, 4 Insufficient memory. preemption: 0/7 nodes are available: 3 No preemption victims found for incoming pod, 4 Preemption is not helpful for scheduling..
Nov 30 11:32:34.703: INFO: At 2022-11-30 11:10:27 +0000 UTC - event for memory-reservation-24f78: {default-scheduler } Scheduled: Successfully assigned autoscaling-469/memory-reservation-24f78 to ca-minion-group-mrqh
Nov 30 11:32:34.703: INFO: At 2022-11-30 11:10:27 +0000 UTC - event for memory-reservation-6tkpn: {default-scheduler } Scheduled: Successfully assigned autoscaling-469/memory-reservation-6tkpn to ca-minion-group-1-gxs4
Nov 30 11:32:34.703: INFO: At 2022-11-30 11:10:27 +0000 UTC - event for memory-reservation-cjbrg: {kubelet ca-minion-group-wp8h} Started: Started container memory-reservation
Nov 30 11:32:34.703: INFO: At 2022-11-30 11:10:27 +0000 UTC - event for memory-reservation-cjbrg: {kubelet ca-minion-group-wp8h} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine
Nov 30 11:32:34.703: INFO: At 2022-11-30 11:10:27 +0000 UTC - event for memory-reservation-cjbrg: {kubelet ca-minion-group-wp8h} Created: Created container memory-reservation
Nov 30 11:32:34.703: INFO: At 2022-11-30 11:10:27 +0000 UTC - event for memory-reservation-v4pf2: {kubelet ca-minion-group-1-6s9n} Started: Started container memory-reservation
Nov 30 11:32:34.703: INFO: At 2022-11-30 11:10:27 +0000 UTC - event for memory-reservation-v4pf2: {kubelet ca-minion-group-1-6s9n} Created: Created container memory-reservation
Nov 30 11:32:34.703: INFO: At 2022-11-30 11:10:27 +0000 UTC - event for memory-reservation-v4pf2: {kubelet ca-minion-group-1-6s9n} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine
Nov 30 11:32:34.703: INFO: At 2022-11-30 11:10:28 +0000 UTC - event for memory-reservation-24f78: {kubelet ca-minion-group-mrqh} Started: Started container memory-reservation
Nov 30 11:32:34.703: INFO: At 2022-11-30 11:10:28 +0000 UTC - event for memory-reservation-24f78: {kubelet ca-minion-group-mrqh} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine
Nov 30 11:32:34.703: INFO: At 2022-11-30 11:10:28 +0000 UTC - event for memory-reservation-24f78: {kubelet ca-minion-group-mrqh} Created: Created container memory-reservation
Nov 30 11:32:34.703: INFO: At 2022-11-30 11:10:28 +0000 UTC - event for memory-reservation-2hjsh: {kubelet ca-minion-group-1-zl6v} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine
Nov 30 11:32:34.703: INFO: At 2022-11-30 11:10:28 +0000 UTC - event for memory-reservation-2hjsh: {kubelet ca-minion-group-1-zl6v} Started: Started container memory-reservation
Nov 30 11:32:34.703: INFO: At 2022-11-30 11:10:28 +0000 UTC - event for memory-reservation-2hjsh: {kubelet ca-minion-group-1-zl6v} Created: Created container memory-reservation
Nov 30 11:32:34.703: INFO: At 2022-11-30 11:10:28 +0000 UTC - event for memory-reservation-6tkpn: {kubelet ca-minion-group-1-gxs4} Started: Started container memory-reservation
Nov 30 11:32:34.703: INFO: At 2022-11-30 11:10:28 +0000 UTC - event for memory-reservation-6tkpn: {kubelet ca-minion-group-1-gxs4} Created: Created container memory-reservation
Nov 30 11:32:34.703: INFO: At 2022-11-30 11:10:28 +0000 UTC - event for memory-reservation-6tkpn: {kubelet ca-minion-group-1-gxs4} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine
Nov 30 11:32:34.703: INFO: At 2022-11-30 11:10:28 +0000 UTC - event for memory-reservation-wpmmp: {default-scheduler } Scheduled: Successfully assigned autoscaling-469/memory-reservation-wpmmp to ca-minion-group-zm08
Nov 30 11:32:34.703: INFO: At 2022-11-30 11:10:29 +0000 UTC - event for memory-reservation-wpmmp: {kubelet ca-minion-group-zm08} Created: Created container memory-reservation
Nov 30 11:32:34.703: INFO: At 2022-11-30 11:10:29 +0000 UTC - event for memory-reservation-wpmmp: {kubelet ca-minion-group-zm08} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine
Nov 30 11:32:34.703: INFO: At 2022-11-30 11:10:29 +0000 UTC - event for memory-reservation-wpmmp: {kubelet ca-minion-group-zm08} Started: Started container memory-reservation
Nov 30 11:32:34.703: INFO: At 2022-11-30 11:16:46 +0000 UTC - event for memory-reservation-2hjsh: {node-controller } NodeNotReady: Node is not ready
Nov 30 11:32:34.703: INFO: At 2022-11-30 11:19:21 +0000 UTC - event for memory-reservation: {replication-controller } SuccessfulCreate: Created pod: memory-reservation-l5p5r
Nov 30 11:32:34.703: INFO: At 2022-11-30 11:19:21 +0000 UTC - event for memory-reservation-2hjsh: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod autoscaling-469/memory-reservation-2hjsh
Nov 30 11:32:34.703: INFO: At 2022-11-30 11:19:21 +0000 UTC - event for memory-reservation-l5p5r: {default-scheduler } FailedScheduling: 0/6 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 1 node(s) were unschedulable, 6 Insufficient memory. preemption: 0/6 nodes are available: 1 Preemption is not helpful for scheduling, 5 No preemption victims found for incoming pod..
Nov 30 11:32:34.703: INFO: At 2022-11-30 11:21:01 +0000 UTC - event for memory-reservation-6tkpn: {node-controller } NodeNotReady: Node is not ready
Nov 30 11:32:34.703: INFO: At 2022-11-30 11:21:01 +0000 UTC - event for memory-reservation-v4pf2: {node-controller } NodeNotReady: Node is not ready
Nov 30 11:32:34.703: INFO: At 2022-11-30 11:21:11 +0000 UTC - event for memory-reservation-24f78: {node-controller } NodeNotReady: Node is not ready
Nov 30 11:32:34.703: INFO: At 2022-11-30 11:21:21 +0000 UTC - event for memory-reservation-wpmmp: {node-controller } NodeNotReady: Node is not ready
Nov 30 11:32:34.703: INFO: At 2022-11-30 11:23:21 +0000 UTC - event for memory-reservation: {replication-controller } SuccessfulCreate: (combined from similar events): Created pod: memory-reservation-rzl9v
Nov 30 11:32:34.703: INFO: At 2022-11-30 11:23:21 +0000 UTC - event for memory-reservation: {replication-controller } SuccessfulCreate: Created pod: memory-reservation-qhrq8
Nov 30 11:32:34.703: INFO: At 2022-11-30 11:23:21 +0000 UTC - event for memory-reservation: {replication-controller } SuccessfulCreate: Created pod: memory-reservation-zwkdz
Nov 30 11:32:34.703: INFO: At 2022-11-30 11:23:21 +0000 UTC - event for memory-reservation-6tkpn: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod autoscaling-469/memory-reservation-6tkpn
Nov 30 11:32:34.703: INFO: At 2022-11-30 11:23:21 +0000 UTC - event for memory-reservation-l5p5r: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 1 node(s) were unschedulable, 2 Insufficient memory. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling..
Nov 30 11:32:34.703: INFO: At 2022-11-30 11:23:21 +0000 UTC - event for memory-reservation-qhrq8: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 1 node(s) were unschedulable, 2 Insufficient memory. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling..
Nov 30 11:32:34.703: INFO: At 2022-11-30 11:23:21 +0000 UTC - event for memory-reservation-v4pf2: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod autoscaling-469/memory-reservation-v4pf2
Nov 30 11:32:34.703: INFO: At 2022-11-30 11:23:21 +0000 UTC - event for memory-reservation-vkr58: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 1 node(s) were unschedulable, 2 Insufficient memory. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling..
Nov 30 11:32:34.703: INFO: At 2022-11-30 11:23:21 +0000 UTC - event for memory-reservation-zwkdz: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 1 node(s) were unschedulable, 2 Insufficient memory. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling..
Nov 30 11:32:34.703: INFO: At 2022-11-30 11:23:41 +0000 UTC - event for memory-reservation-rzl9v: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 1 node(s) were unschedulable, 2 Insufficient memory. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling..
Nov 30 11:32:34.703: INFO: At 2022-11-30 11:30:39 +0000 UTC - event for memory-reservation-cjbrg: {kubelet ca-minion-group-wp8h} Killing: Stopping container memory-reservation
Nov 30 11:32:34.703: INFO: At 2022-11-30 11:30:39 +0000 UTC - event for memory-reservation-l5p5r: {default-scheduler } FailedScheduling: skip schedule deleting pod: autoscaling-469/memory-reservation-l5p5r
Nov 30 11:32:34.703: INFO: At 2022-11-30 11:30:39 +0000 UTC - event for memory-reservation-qhrq8: {default-scheduler } FailedScheduling: skip schedule deleting pod: autoscaling-469/memory-reservation-qhrq8
Nov 30 11:32:34.703: INFO: At 2022-11-30 11:30:39 +0000 UTC - event for memory-reservation-rzl9v: {default-scheduler } FailedScheduling: skip schedule deleting pod: autoscaling-469/memory-reservation-rzl9v
Nov 30 11:32:34.703: INFO: At 2022-11-30 11:30:39 +0000 UTC - event for memory-reservation-vkr58: {default-scheduler } FailedScheduling: skip schedule deleting pod: autoscaling-469/memory-reservation-vkr58
Nov 30 11:32:34.703: INFO: At 2022-11-30 11:30:39 +0000 UTC - event for memory-reservation-zwkdz: {default-scheduler } FailedScheduling: skip schedule deleting pod: autoscaling-469/memory-reservation-zwkdz
Nov 30 11:32:34.745: INFO: POD NODE PHASE GRACE CONDITIONS
Nov 30 11:32:34.745: INFO:
Nov 30 11:32:34.789: INFO: Logging node info for node ca-master
Nov 30 11:32:34.832: INFO: Node Info: &Node{ObjectMeta:{ca-master 2a25a1e5-76d1-4d88-8f78-b63dca9ba016 31345 0 2022-11-30 08:55:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:ca-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-30 08:55:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-30 08:55:56 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2022-11-30 08:56:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2022-11-30 11:29:38 +0000 UTC FieldsV1
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-autoscaling-migs/us-west1-b/ca-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-30 08:55:56 +0000 UTC,LastTransitionTime:2022-11-30 08:55:56 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-30 11:29:38 +0000 UTC,LastTransitionTime:2022-11-30 08:55:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-30 11:29:38 +0000 UTC,LastTransitionTime:2022-11-30 08:55:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-30 11:29:38 +0000 UTC,LastTransitionTime:2022-11-30 08:55:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-30 11:29:38 +0000 UTC,LastTransitionTime:2022-11-30 08:56:05 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.230.76.149,},NodeAddress{Type:InternalDNS,Address:ca-master.c.k8s-jkns-gci-autoscaling-migs.internal,},NodeAddress{Type:Hostname,Address:ca-master.c.k8s-jkns-gci-autoscaling-migs.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bac070a384fefb9133ee7878e15673cf,SystemUUID:bac070a3-84fe-fb91-33ee-7878e15673cf,BootID:fe19ddf9-af1e-416e-a389-0ed6e929f60e,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.6.6,KubeletVersion:v1.27.0-alpha.0.53+d98e9620e37995,KubeProxyVersion:v1.27.0-alpha.0.53+d98e9620e37995,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.53_d98e9620e37995],SizeBytes:135160275,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.53_d98e9620e37995],SizeBytes:124990265,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.53_d98e9620e37995],SizeBytes:57660216,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/autoscaling/cluster-autoscaler@sha256:07ab8c89cd0ad296ddb6347febe196d8fe0e1c757656a98f71199860d87cf1a5 registry.k8s.io/autoscaling/cluster-autoscaler:v1.22.0],SizeBytes:24220268,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 30 11:32:34.833: INFO: Logging kubelet events for node ca-master Nov 30 11:32:34.879: INFO: Logging pods the kubelet thinks is on node ca-master Nov 30 11:32:34.946: INFO: kube-apiserver-ca-master started at 2022-11-30 08:55:00 +0000 UTC (0+1 container statuses recorded) Nov 30 11:32:34.946: INFO: Container kube-apiserver ready: true, restart count 0 Nov 30 11:32:34.946: INFO: kube-controller-manager-ca-master started at 2022-11-30 08:55:00 +0000 UTC (0+1 container statuses recorded) Nov 30 11:32:34.946: INFO: Container kube-controller-manager ready: true, restart count 1 Nov 30 11:32:34.946: INFO: kube-scheduler-ca-master started at 2022-11-30 08:55:00 +0000 UTC (0+1 container statuses 
recorded) Nov 30 11:32:34.946: INFO: Container kube-scheduler ready: true, restart count 0 Nov 30 11:32:34.946: INFO: etcd-server-events-ca-master started at 2022-11-30 08:55:00 +0000 UTC (0+1 container statuses recorded) Nov 30 11:32:34.946: INFO: Container etcd-container ready: true, restart count 0 Nov 30 11:32:34.946: INFO: etcd-server-ca-master started at 2022-11-30 08:55:00 +0000 UTC (0+1 container statuses recorded) Nov 30 11:32:34.946: INFO: Container etcd-container ready: true, restart count 0 Nov 30 11:32:34.946: INFO: cluster-autoscaler-ca-master started at 2022-11-30 08:55:18 +0000 UTC (0+1 container statuses recorded) Nov 30 11:32:34.946: INFO: Container cluster-autoscaler ready: true, restart count 2 Nov 30 11:32:34.946: INFO: l7-lb-controller-ca-master started at 2022-11-30 08:55:18 +0000 UTC (0+1 container statuses recorded) Nov 30 11:32:34.946: INFO: Container l7-lb-controller ready: true, restart count 2 Nov 30 11:32:34.946: INFO: konnectivity-server-ca-master started at 2022-11-30 08:55:00 +0000 UTC (0+1 container statuses recorded) Nov 30 11:32:34.946: INFO: Container konnectivity-server-container ready: true, restart count 0 Nov 30 11:32:34.946: INFO: kube-addon-manager-ca-master started at 2022-11-30 08:55:18 +0000 UTC (0+1 container statuses recorded) Nov 30 11:32:34.946: INFO: Container kube-addon-manager ready: true, restart count 0 Nov 30 11:32:34.946: INFO: metadata-proxy-v0.1-vp7mp started at 2022-11-30 08:56:16 +0000 UTC (0+2 container statuses recorded) Nov 30 11:32:34.946: INFO: Container metadata-proxy ready: true, restart count 0 Nov 30 11:32:34.946: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 30 11:32:35.142: INFO: Latency metrics for node ca-master Nov 30 11:32:35.143: INFO: Logging node info for node ca-minion-group-1-3rkr Nov 30 11:32:35.186: INFO: Node Info: &Node{ObjectMeta:{ca-minion-group-1-3rkr 03ad1220-33da-4e08-a355-92671ad7532c 31834 0 2022-11-30 11:32:02 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:ca-minion-group-1-3rkr kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-30 11:32:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {node-problem-detector Update v1 2022-11-30 11:32:07 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-30 11:32:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.31.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-11-30 11:32:15 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-30 11:32:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.31.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-autoscaling-migs/us-west1-b/ca-minion-group-1-3rkr,Unschedulable:false,Taints:[]Taint{Taint{Key:DeletionCandidateOfClusterAutoscaler,Value:1669807925,Effect:PreferNoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.31.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-30 11:32:07 +0000 UTC,LastTransitionTime:2022-11-30 11:32:06 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-30 11:32:07 +0000 UTC,LastTransitionTime:2022-11-30 11:32:06 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning 
properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-30 11:32:07 +0000 UTC,LastTransitionTime:2022-11-30 11:32:06 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-30 11:32:07 +0000 UTC,LastTransitionTime:2022-11-30 11:32:06 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-30 11:32:07 +0000 UTC,LastTransitionTime:2022-11-30 11:32:06 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-30 11:32:07 +0000 UTC,LastTransitionTime:2022-11-30 11:32:06 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-30 11:32:07 +0000 UTC,LastTransitionTime:2022-11-30 11:32:06 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-30 11:32:15 +0000 UTC,LastTransitionTime:2022-11-30 11:32:15 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-30 11:32:32 +0000 UTC,LastTransitionTime:2022-11-30 11:32:02 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-30 11:32:32 +0000 UTC,LastTransitionTime:2022-11-30 11:32:02 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-30 11:32:32 +0000 UTC,LastTransitionTime:2022-11-30 11:32:02 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-30 11:32:32 +0000 UTC,LastTransitionTime:2022-11-30 11:32:04 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.33,},NodeAddress{Type:ExternalIP,Address:34.82.102.154,},NodeAddress{Type:InternalDNS,Address:ca-minion-group-1-3rkr.c.k8s-jkns-gci-autoscaling-migs.internal,},NodeAddress{Type:Hostname,Address:ca-minion-group-1-3rkr.c.k8s-jkns-gci-autoscaling-migs.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:425e60bb57a1b80c6d89b37b11be9124,SystemUUID:425e60bb-57a1-b80c-6d89-b37b11be9124,BootID:ae6d707e-fde5-4cfb-b692-6f39ca9d135d,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.6.6,KubeletVersion:v1.27.0-alpha.0.53+d98e9620e37995,KubeProxyVersion:v1.27.0-alpha.0.53+d98e9620e37995,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.53_d98e9620e37995],SizeBytes:67201736,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 30 11:32:35.186: INFO: Logging kubelet events for node ca-minion-group-1-3rkr Nov 30 11:32:35.231: INFO: Logging pods the kubelet thinks is on node ca-minion-group-1-3rkr Nov 30 11:32:35.295: INFO: kube-proxy-ca-minion-group-1-3rkr started at 2022-11-30 11:32:02 +0000 UTC (0+1 container statuses recorded) Nov 30 11:32:35.295: INFO: Container kube-proxy ready: true, restart count 0 Nov 30 11:32:35.295: INFO: metadata-proxy-v0.1-n8txw started at 2022-11-30 11:32:03 +0000 UTC (0+2 container statuses recorded) Nov 30 11:32:35.295: INFO: Container metadata-proxy ready: true, restart count 0 Nov 30 11:32:35.295: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 30 11:32:35.295: INFO: konnectivity-agent-ll899 started at 2022-11-30 11:32:15 +0000 UTC (0+1 container statuses recorded) Nov 30 11:32:35.295: INFO: Container konnectivity-agent ready: true, restart count 0 Nov 30 11:32:35.468: INFO: Latency metrics for node ca-minion-group-1-3rkr Nov 30 11:32:35.468: INFO: Logging node info for node ca-minion-group-wp8h Nov 30 11:32:35.513: INFO: Node Info: &Node{ObjectMeta:{ca-minion-group-wp8h cabb7ed6-6a4e-4a14-a7cb-07ef65191e0f 31685 0 2022-11-30 09:09:55 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:ca-minion-group-wp8h kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 
volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-30 09:09:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-30 09:10:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.7.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-30 09:10:06 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2022-11-30 11:30:13 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-30 11:31:47 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.7.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-autoscaling-migs/us-west1-b/ca-minion-group-wp8h,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.7.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 
DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-30 11:30:13 +0000 UTC,LastTransitionTime:2022-11-30 09:09:59 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-30 11:30:13 +0000 UTC,LastTransitionTime:2022-11-30 09:09:59 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-30 11:30:13 +0000 UTC,LastTransitionTime:2022-11-30 09:09:59 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-30 11:30:13 +0000 UTC,LastTransitionTime:2022-11-30 09:09:59 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-30 11:30:13 +0000 UTC,LastTransitionTime:2022-11-30 09:09:59 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-30 11:30:13 +0000 UTC,LastTransitionTime:2022-11-30 09:09:59 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-30 11:30:13 +0000 UTC,LastTransitionTime:2022-11-30 09:09:59 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-30 09:10:06 +0000 UTC,LastTransitionTime:2022-11-30 09:10:06 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-30 11:31:47 +0000 UTC,LastTransitionTime:2022-11-30 09:09:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-30 11:31:47 +0000 UTC,LastTransitionTime:2022-11-30 09:09:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-30 11:31:47 +0000 UTC,LastTransitionTime:2022-11-30 09:09:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-30 11:31:47 +0000 UTC,LastTransitionTime:2022-11-30 09:09:56 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.8,},NodeAddress{Type:ExternalIP,Address:34.168.80.138,},NodeAddress{Type:InternalDNS,Address:ca-minion-group-wp8h.c.k8s-jkns-gci-autoscaling-migs.internal,},NodeAddress{Type:Hostname,Address:ca-minion-group-wp8h.c.k8s-jkns-gci-autoscaling-migs.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b506b63fb01c6040e71588bca8be6fdd,SystemUUID:b506b63f-b01c-6040-e715-88bca8be6fdd,BootID:bd2be204-29ef-43fe-9f42-c8f31fa19831,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.6.6,KubeletVersion:v1.27.0-alpha.0.53+d98e9620e37995,KubeProxyVersion:v1.27.0-alpha.0.53+d98e9620e37995,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.53_d98e9620e37995],SizeBytes:67201736,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 30 11:32:35.514: INFO: Logging kubelet events for node ca-minion-group-wp8h Nov 30 11:32:35.560: INFO: Logging pods the kubelet thinks is on node ca-minion-group-wp8h Nov 30 11:32:35.628: INFO: metadata-proxy-v0.1-kx6wg started at 2022-11-30 09:09:56 +0000 UTC (0+2 container statuses recorded) Nov 30 11:32:35.628: INFO: Container metadata-proxy ready: true, restart count 0 Nov 30 11:32:35.628: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 30 11:32:35.628: INFO: konnectivity-agent-hh8bs started at 2022-11-30 09:10:06 +0000 UTC (0+1 container statuses recorded) Nov 30 11:32:35.628: INFO: Container konnectivity-agent ready: true, 
restart count 0 Nov 30 11:32:35.628: INFO: kube-proxy-ca-minion-group-wp8h started at 2022-11-30 09:09:56 +0000 UTC (0+1 container statuses recorded) Nov 30 11:32:35.628: INFO: Container kube-proxy ready: true, restart count 0 Nov 30 11:32:35.628: INFO: volume-snapshot-controller-0 started at 2022-11-30 10:04:18 +0000 UTC (0+1 container statuses recorded) Nov 30 11:32:35.628: INFO: Container volume-snapshot-controller ready: true, restart count 0 Nov 30 11:32:35.628: INFO: coredns-6d97d5ddb-fwzcx started at 2022-11-30 09:38:58 +0000 UTC (0+1 container statuses recorded) Nov 30 11:32:35.628: INFO: Container coredns ready: true, restart count 0 Nov 30 11:32:35.628: INFO: metrics-server-v0.5.2-867b8754b9-4qcz5 started at 2022-11-30 09:33:57 +0000 UTC (0+2 container statuses recorded) Nov 30 11:32:35.628: INFO: Container metrics-server ready: true, restart count 0 Nov 30 11:32:35.628: INFO: Container metrics-server-nanny ready: true, restart count 0 Nov 30 11:32:35.628: INFO: l7-default-backend-8549d69d99-sn7gr started at 2022-11-30 09:38:58 +0000 UTC (0+1 container statuses recorded) Nov 30 11:32:35.628: INFO: Container default-http-backend ready: true, restart count 0 Nov 30 11:32:35.819: INFO: Latency metrics for node ca-minion-group-wp8h [DeferCleanup (Each)] [sig-autoscaling] Cluster size autoscaling [Slow] tear down framework | framework.go:193 STEP: Destroying namespace "autoscaling-469" for this suite. 11/30/22 11:32:35.819
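The dump above is mostly routine node status, but one detail matters for this scale-down test: ca-minion-group-1-3rkr carries the taint DeletionCandidateOfClusterAutoscaler=1669807925:PreferNoSchedule, which Cluster Autoscaler places on nodes it is considering for removal, while the drain target ca-minion-group-wp8h shows an empty taint list at this point. On a live cluster the same information can be pulled without a full object dump; a minimal sketch using stock kubectl (node names taken from the log above):

  # List every node's taint keys in one table
  kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints[*].key'

  # Inspect a single node's taints and conditions in detail
  kubectl describe node ca-minion-group-1-3rkr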
error during ./hack/e2e-internal/e2e-down.sh (interrupted): signal: interrupt
from junit_runner.xml
error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Feature:ClusterSizeAutoscalingScaleUp\]|\[Feature:ClusterSizeAutoscalingScaleDown\] --ginkgo.skip=\[Flaky\] --minStartupPods=8 --report-dir=/workspace/_artifacts --disable-log-dump=true --cluster-ip-range=10.64.0.0/14: exit status 1
from junit_runner.xml
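For reference, the --ginkgo.focus and --ginkgo.skip arguments in the failed command are regular expressions matched against full spec names (the bracketed tags such as [Feature:...] and [Flaky] are part of the name, hence the backslash escaping). Re-running just this feature set by hand would look roughly like the following, reconstructed from the command line above:

  ./hack/ginkgo-e2e.sh \
    '--ginkgo.focus=\[Feature:ClusterSizeAutoscalingScaleUp\]|\[Feature:ClusterSizeAutoscalingScaleDown\]' \
    '--ginkgo.skip=\[Flaky\]'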
kubetest --timeout triggered
from junit_runner.xml
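Read together, the three junit_runner.xml entries above tell a consistent story: the job exceeded kubetest's overall deadline, kubetest interrupted its child processes ("kubetest --timeout triggered"), the ginkgo suite then exited with status 1, and the teardown script was killed mid-run ("(interrupted): signal: interrupt"). The deadline is set on the kubetest invocation itself; a hedged sketch with an illustrative value, not the one used by this job:

  # --timeout bounds the whole up/test/down cycle; on expiry kubetest
  # interrupts whatever phase is still running.
  kubetest --up --test --down --timeout=300m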
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group up from 0[Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Shouldn't perform scale up operation and should list unhealthy status if most of the cluster is broken[Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should add node to the particular mig [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down when rescheduling a pod is required and pdb allows for it[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small and one node is broken [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pod requesting EmptyDir volume is pending [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pods are pending due to host port conflict [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should scale up when non expendable pod is created [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't be able to scale down when rescheduling a pod is required, but pdb doesn't allow drain[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't increase cluster size if pending pod is too large [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale down when non expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale up when expendable pod is created [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale up when expendable pod is preempted [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't trigger additional scale-ups during processing scale-up [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [ReportAfterSuite] Kubernetes e2e suite report
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedBeforeSuite]
kubetest Check APIReachability
kubetest Deferred TearDown
kubetest DumpClusterLogs
kubetest Extract
kubetest GetDeployer
kubetest IsUp
kubetest Prepare
kubetest TearDown Previous
kubetest Up
kubetest kubectl version
kubetest list nodes
kubetest test setup
Kubernetes e2e suite [It] [sig-api-machinery] API priority and fairness should ensure that requests can be classified by adding FlowSchema and PriorityLevelConfiguration
Kubernetes e2e suite [It] [sig-api-machinery] API priority and fairness should ensure that requests can't be drowned out (fairness)
Kubernetes e2e suite [It] [sig-api-machinery] API priority and fairness should ensure that requests can't be drowned out (priority)
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] [Flaky] kubectl explain works for CR with the same resource name as built-in object.
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST NOT fail validation for create of a custom resource that satisfies the x-kubernetes-validations rules
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail create of a custom resource definition that contains a x-kubernetes-validations rule that refers to a property that do not exist
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail create of a custom resource definition that contains an x-kubernetes-validations rule that contains a syntax error
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail create of a custom resource definition that contains an x-kubernetes-validations rule that exceeds the estimated cost limit
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail create of a custom resource that exceeds the runtime cost limit for x-kubernetes-validations rule execution
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail update of a custom resource that does not satisfy a x-kubernetes-validations transition rule
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail validation for create of a custom resource that does not satisfy the x-kubernetes-validations rules
Kubernetes e2e suite [It] [sig-api-machinery] Discovery Custom resource should have storage version hash
Kubernetes e2e suite [It] [sig-api-machinery] Discovery should accurately determine present and missing resources
Kubernetes e2e suite [It] [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Etcd failure [Disruptive] should recover from SIGKILL
Kubernetes e2e suite [It] [sig-api-machinery] Etcd failure [Disruptive] should recover from network partition with master
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should delete jobs and pods created by cronjob
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should orphan pods created by rc if deleteOptions.OrphanDependents is nil
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should support cascading deletion of custom resources
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should support orphan deletion of custom resources
Kubernetes e2e suite [It] [sig-api-machinery] Generated clientset should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod
Kubernetes e2e suite [It] [sig-api-machinery] Generated clientset should create v1 cronJobs, delete cronJobs, watch cronJobs
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should always delete fast (ALL of 100 namespaces in 150 seconds) [Feature:ComprehensiveNamespaceDraining]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should apply a finalizer to a Namespace [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should apply an update to a Namespace [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should apply changes to a namespace status [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds)
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's multiple priority class scope (quota set to pod count: 2) against 2 pods with same priority classes.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (cpu, memory quota set) against a pod with same priority class.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against 2 pods with different priority class.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against 2 pods with same priority class.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against a pod with different priority class (ScopeSelectorOpExists).
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against a pod with different priority class (ScopeSelectorOpNotIn).
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against a pod with same priority class.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:ScopeSelectors] should verify ResourceQuota with best effort scope using scope-selectors.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:ScopeSelectors] should verify ResourceQuota with terminating scopes through scope selectors.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should apply changes to a resourcequota status [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a custom resource.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should manage the lifecycle of a ResourceQuota [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should verify ResourceQuota with cross namespace pod affinity scope using scope-selectors.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Server request timeout default timeout should be used if the specified timeout in the request URL is 0s
Kubernetes e2e suite [It] [sig-api-machinery] Server request timeout should return HTTP status code 400 if the user specifies an invalid timeout in the request URL
Kubernetes e2e suite [It] [sig-api-machinery] Server request timeout the request should be served with a default timeout if the specified timeout in the request URL exceeds maximum allowed
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should create an applied object if it does not already exist
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should give up ownership of a field if forced applied by a controller
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should ignore conflict errors if force apply is used
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should not remove a field if an owner unsets the field but other managers still have ownership of the field
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should remove a field if it is owned but removed in the apply request
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should work for CRDs
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should work for subresources
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for API chunking should return chunks of results for list calls
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for API chunking should support continue listing from the last key if the original version has been compacted away, though the list is inconsistent [Slow]
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for Table transformation should return chunks of table results for list calls
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for Table transformation should return generic metadata details across all namespaces for nodes
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for Table transformation should return pod details
Kubernetes e2e suite [It] [sig-api-machinery] StorageVersion resources [Feature:StorageVersionAPI] storage version with non-existing id should be GC'ed
Kubernetes e2e suite [It] [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/json"
Kubernetes e2e suite [It] [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/json,application/vnd.kubernetes.protobuf"
Kubernetes e2e suite [It] [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/vnd.kubernetes.protobuf"
Kubernetes e2e suite [It] [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/vnd.kubernetes.protobuf,application/json"
Kubernetes e2e suite [It] [sig-api-machinery] health handlers should contain necessary checks
Kubernetes e2e suite [It] [sig-api-machinery] kube-apiserver identity [Feature:APIServerIdentity] kube-apiserver identity should persist after restart [Disruptive]
Kubernetes e2e suite [It] [sig-api-machinery] server version should find the server version [Conformance]
Kubernetes e2e suite [It] [sig-apps] ControllerRevision [Serial] should manage the lifecycle of a ControllerRevision [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should be able to schedule after more than 100 missed schedule
Kubernetes e2e suite [It] [sig-apps] CronJob should delete failed finished jobs with limit of one job
Kubernetes e2e suite [It] [sig-apps] CronJob should delete successful finished jobs with limit of one successful job
Kubernetes e2e suite [It] [sig-apps] CronJob should not emit unexpected warnings
Kubernetes e2e suite [It] [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should remove from active list jobs that have been deleted
Kubernetes e2e suite [It] [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should support CronJob API operations [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should support timezone
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should list and delete a collection of DaemonSets [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should not update pod when spec was updated and update strategy is OnDelete
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should run and stop complex daemon with node affinity
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should surge pods onto nodes when spec was updated and update strategy is RollingUpdate
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should verify changes to a daemon set status [Conformance]
Kubernetes e2e suite [It] [sig-apps] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart
Kubernetes e2e suite [It] [sig-apps] DaemonRestart [Disruptive] Kube-proxy should recover after being killed accidentally
Kubernetes e2e suite [It] [sig-apps] DaemonRestart [Disruptive] Kubelet should not restart containers across restart
Kubernetes e2e suite [It] [sig-apps] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart
Kubernetes e2e suite [It] [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment deployment reaping should cascade to its replica sets and pods
Kubernetes e2e suite [It] [sig-apps] Deployment deployment should delete old replica sets [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment deployment should support proportional scaling [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment deployment should support rollover [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment iterative rollouts should eventually progress
Kubernetes e2e suite [It] [sig-apps] Deployment should not disrupt a cloud load-balancer's connectivity during rollout
Kubernetes e2e suite [It] [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment should validate Deployment Status endpoints [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment test Deployment ReplicaSet orphaning and adoption regarding controllerRef
Kubernetes e2e suite [It] [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: enough pods, absolute => should allow an eviction
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: maxUnavailable allow single eviction, percentage => should allow an eviction
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: maxUnavailable deny evictions, integer => should not allow an eviction [Serial]
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: no PDB => should allow an eviction
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: too few pods, absolute => should not allow an eviction
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction [Serial]
Kubernetes e2e suite [It] [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it [Conformance]
Kubernetes e2e suite [It] [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]
Kubernetes e2e suite [It] [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]
Kubernetes e2e suite [It] [sig-apps] DisruptionController should observe that the PodDisruptionBudget status is not updated for unmanaged pods
Kubernetes e2e suite [It] [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job Using a pod failure policy to not count some failures towards the backoffLimit Ignore DisruptionTarget condition
Kubernetes e2e suite [It] [sig-apps] Job Using a pod failure policy to not count some failures towards the backoffLimit Ignore exit code 137
Kubernetes e2e suite [It] [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should allow to use the pod failure policy on exit code to fail the job early
Kubernetes e2e suite [It] [sig-apps] Job should allow to use the pod failure policy to not count the failure towards the backoffLimit
Kubernetes e2e suite [It] [sig-apps] Job should apply changes to a job status [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should create pods for an Indexed job with completion indexes and specified hostname [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should delete a job [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should delete pods when suspended
Kubernetes e2e suite [It] [sig-apps] Job should fail to exceed backoffLimit
Kubernetes e2e suite [It] [sig-apps] Job should fail when exceeds active deadline
Kubernetes e2e suite [It] [sig-apps] Job should manage the lifecycle of a job [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should not create pods when created in suspend state
Kubernetes e2e suite [It] [sig-apps] Job should remove pods when job is deleted
Kubernetes e2e suite [It] [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should run a job to completion when tasks sometimes fail and are not locally restarted
Kubernetes e2e suite [It] [sig-apps] Job should run a job to completion when tasks succeed
Kubernetes e2e suite [It] [sig-apps] Job should run a job to completion with CPU requests [Serial]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet Replace and Patch tests [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should list and delete a collection of ReplicaSets [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should serve a basic image on each replica with a private image
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should surface a failure condition on a common issue like exceeded quota
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should validate Replicaset Status endpoints [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should get and update a ReplicationController scale [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should release no longer matching pods [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should serve a basic image on each replica with a private image
Kubernetes e2e suite [It] [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet AvailableReplicas should get updated accordingly when MinReadySeconds is enabled
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should adopt matching orphans and release non-matching pods
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should implement legacy replacement when the update strategy is OnDelete
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should list, patch and delete a collection of StatefulSets [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should not deadlock when a pod's predecessor fails
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications with PVCs
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working CockroachDB cluster
Kubernetes e2e suite [It] [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working mysql cluster
Kubernetes e2e suite [It] [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working redis cluster
Kubernetes e2e suite [It] [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working zookeeper cluster
Kubernetes e2e suite [It] [sig-apps] StatefulSet MinReadySeconds should be honored when enabled
Kubernetes e2e suite [It] [sig-apps] StatefulSet Non-retain StatefulSetPersistentVolumeClaimPolicy [Feature:StatefulSetAutoDeletePVC] should delete PVCs after adopting pod (WhenDeleted)
Kubernetes e2e suite [It] [sig-apps] StatefulSet Non-retain StatefulSetPersistentVolumeClaimPolicy [Feature:StatefulSetAutoDeletePVC] should delete PVCs after adopting pod (WhenScaled) [Feature:StatefulSetAutoDeletePVC]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Non-retain StatefulSetPersistentVolumeClaimPolicy [Feature:StatefulSetAutoDeletePVC] should delete PVCs with a OnScaledown policy
Kubernetes e2e suite [It] [sig-apps] StatefulSet Non-retain StatefulSetPersistentVolumeClaimPolicy [Feature:StatefulSetAutoDeletePVC] should delete PVCs with a WhenDeleted policy
Kubernetes e2e suite [It] [sig-apps] TTLAfterFinished job should be deleted once it finishes after TTL seconds
Kubernetes e2e suite [It] [sig-apps] stateful Upgrade [Feature:StatefulUpgrade] stateful upgrade should maintain a functioning cluster
Kubernetes e2e suite [It] [sig-architecture] Conformance Tests should have at least two untainted nodes [Conformance]
Kubernetes e2e suite [It] [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]
Kubernetes e2e suite [It] [sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR
Kubernetes e2e suite [It] [sig-auth] SelfSubjectReview [Feature:APISelfSubjectReview] should support SelfSubjectReview API operations
Kubernetes e2e suite [It] [sig-auth] ServiceAccount admission controller migration [Feature:BoundServiceAccountTokenVolume] master upgrade should maintain a functioning cluster
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts no secret-based service account token should be auto-generated
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should mount projected service account token [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should update a ServiceAccount [Conformance]
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthenticator] The kubelet can delegate ServiceAccount tokens to the API server
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthenticator] The kubelet's main port 10250 should reject requests with no credentials
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] A node shouldn't be able to create another node
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] A node shouldn't be able to delete another node
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] Getting a non-existent configmap should exit with the Forbidden error, not a NotFound error
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] Getting a non-existent secret should exit with the Forbidden error, not a NotFound error
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] Getting a secret for a workload the node has access to should succeed
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] Getting an existing configmap should exit with the Forbidden error
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] Getting an existing secret should exit with the Forbidden error
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] CA ignores unschedulable pods while scheduling schedulable pods [Feature:ClusterAutoscalerScalability6]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale down empty nodes [Feature:ClusterAutoscalerScalability3]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale down underutilized nodes [Feature:ClusterAutoscalerScalability4]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale up at all [Feature:ClusterAutoscalerScalability1]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale up twice [Feature:ClusterAutoscalerScalability2]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] shouldn't scale down with underutilized nodes due to host port conflicts [Feature:ClusterAutoscalerScalability5]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should not scale GPU pool up if pod does not require GPUs [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should scale down GPU pool from 1 [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should scale up GPU pool from 0 [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should scale up GPU pool from 1 [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed when there is non autoscaled pool[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should disable node pool autoscaling [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small and there is another node pool that is not autoscaled [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should scale up correct target pool [Feature:ClusterSizeAutoscalingScaleUp]
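
The scale-up tests above share a single trigger mechanism: the cluster autoscaler only adds nodes once pods sit Pending for lack of resources, so each test creates pods whose requests cannot fit on the existing nodes. A minimal sketch of such a reservation pod, with hypothetical names and sizes:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig path the run above used; adjust as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "reservation-0", Namespace: "default"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "registry.k8s.io/pause:3.9",
				Resources: corev1.ResourceRequirements{
					// A request too large for any current node keeps the pod
					// Pending, which is the signal the autoscaler acts on.
					Requests: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse("1500m"),
					},
				},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
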
Kubernetes e2e suite [It] [sig-autoscaling] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed
Kubernetes e2e suite [It] [sig-autoscaling] DNS horizontal autoscaling kube-dns-autoscaler should scale kube-dns pods in both nonfaulty and faulty scenarios
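
The kube-dns autoscaling tests above drive the cluster-proportional-autoscaler, whose linear mode (as documented for that component) sizes the Deployment from total cores and node count. A sketch of that formula, with illustrative parameter values:

package main

import (
	"fmt"
	"math"
)

// linearReplicas mirrors the documented linear-mode formula:
// replicas = max(ceil(cores/coresPerReplica), ceil(nodes/nodesPerReplica)),
// clamped to [min, max].
func linearReplicas(cores, nodes int, coresPerReplica, nodesPerReplica float64, min, max int) int {
	r := int(math.Max(
		math.Ceil(float64(cores)/coresPerReplica),
		math.Ceil(float64(nodes)/nodesPerReplica),
	))
	if r < min {
		r = min
	}
	if max > 0 && r > max {
		r = max
	}
	return r
}

func main() {
	// e.g. 2 nodes with 4 cores total, using illustrative parameters:
	fmt.Println(linearReplicas(4, 2, 256, 16, 1, 0)) // -> 1
}
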
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:ClusterSizeAutoscalingScaleUp] [Slow] Autoscaling Autoscaling a service from 1 pod and 3 nodes to 8 pods and >=4 nodes takes less than 15 minutes
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) CustomResourceDefinition Should scale with a CRD targetRef
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) ReplicationController light Should scale from 1 pod to 2 pods
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) ReplicationController light [Slow] Should scale from 2 pods to 1 pod
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment (Container Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Utilization for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment (Container Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Value for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment (Pod Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Utilization for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment (Pod Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Value for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment (Pod Resource) Should scale from 5 pods to 3 pods and then from 3 pods to 1 pod using Average Utilization for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and then from 3 pods to 1 pod
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet with idle sidecar (ContainerResource use case) Should not scale up on a busy sidecar with an idle application
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet with idle sidecar (ContainerResource use case) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods on a busy application with an idle sidecar container
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods and verify decision stability
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and then from 3 pods to 1 pod and verify decision stability
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) [Serial] [Slow] Deployment (Container Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Utilization for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) [Serial] [Slow] Deployment (Container Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Value for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) [Serial] [Slow] Deployment (Pod Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Utilization for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) [Serial] [Slow] Deployment (Pod Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Value for aggregation
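
All of the CPU and Memory cases above configure an autoscaling/v2 HorizontalPodAutoscaler with a resource-metric target; the controller then aims for desired = ceil(current * currentMetric / targetMetric). A hedged sketch of such an object (names and thresholds hypothetical):

package main

import (
	"fmt"

	autoscalingv2 "k8s.io/api/autoscaling/v2"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func cpuHPA() *autoscalingv2.HorizontalPodAutoscaler {
	minReplicas := int32(1)
	target := int32(50) // keep average CPU utilization near 50% of requests
	return &autoscalingv2.HorizontalPodAutoscaler{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-hpa", Namespace: "default"},
		Spec: autoscalingv2.HorizontalPodAutoscalerSpec{
			ScaleTargetRef: autoscalingv2.CrossVersionObjectReference{
				APIVersion: "apps/v1", Kind: "Deployment", Name: "demo",
			},
			MinReplicas: &minReplicas,
			MaxReplicas: 5,
			Metrics: []autoscalingv2.MetricSpec{{
				Type: autoscalingv2.ResourceMetricSourceType,
				Resource: &autoscalingv2.ResourceMetricSource{
					Name: corev1.ResourceCPU,
					Target: autoscalingv2.MetricTarget{
						Type:               autoscalingv2.UtilizationMetricType,
						AverageUtilization: &target,
					},
				},
			}},
		},
	}
}

func main() { fmt.Println(cpuHPA().Name) }
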
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with autoscaling disabled shouldn't scale down
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with autoscaling disabled shouldn't scale up
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with both scale up and down controls configured should keep recommendation within the range over two stabilization windows
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with both scale up and down controls configured should keep recommendation within the range with stabilization window and pod limit rate
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with long upscale stabilization window should scale up only after the stabilization period
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with scale limited by number of Pods rate should scale down no more than given number of Pods per minute
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with scale limited by number of Pods rate should scale up no more than given number of Pods per minute
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with scale limited by percentage should scale down no more than given percentage of current Pods per minute
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with scale limited by percentage should scale up no more than given percentage of current Pods per minute
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with short downscale stabilization window should scale down soon after the stabilization period
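
The non-default-behavior cases above exercise the spec.behavior stanza: stabilization windows, pods/percent rate limits, and the Disabled select policy. A sketch of those knobs with hypothetical values:

package main

import (
	"fmt"

	autoscalingv2 "k8s.io/api/autoscaling/v2"
)

func demoBehavior() *autoscalingv2.HorizontalPodAutoscalerBehavior {
	downWindow := int32(60) // a "short downscale stabilization window"
	disabled := autoscalingv2.DisabledPolicySelect
	return &autoscalingv2.HorizontalPodAutoscalerBehavior{
		ScaleDown: &autoscalingv2.HPAScalingRules{
			StabilizationWindowSeconds: &downWindow,
			Policies: []autoscalingv2.HPAScalingPolicy{{
				// remove at most 1 Pod per minute
				Type: autoscalingv2.PodsScalingPolicy, Value: 1, PeriodSeconds: 60,
			}},
		},
		ScaleUp: &autoscalingv2.HPAScalingRules{
			// the "with autoscaling disabled" cases set this select policy
			SelectPolicy: &disabled,
		},
	}
}

func main() { fmt.Println(demoBehavior().ScaleDown.Policies[0].Value) }
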
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with Custom Metric of type Object from Stackdriver should scale down
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with Custom Metric of type Object from Stackdriver should scale down to 0
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with Custom Metric of type Pod from Stackdriver should scale down
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with Custom Metric of type Pod from Stackdriver should scale down with Prometheus
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with Custom Metric of type Pod from Stackdriver should scale up with two metrics
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with External Metric from Stackdriver should scale down with target average value
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with External Metric from Stackdriver should scale down with target value
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with External Metric from Stackdriver should scale up with two metrics
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with multiple metrics of different types should not scale down when one metric is missing (Container Resource and External Metrics)
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with multiple metrics of different types should not scale down when one metric is missing (Pod and Object Metrics)
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with multiple metrics of different types should scale up when one metric is missing (Pod and External metrics)
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with multiple metrics of different types should scale up when one metric is missing (Resource and Object metrics)
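
The Stackdriver-backed cases above rely on Pod, Object, and External metric sources in autoscaling/v2. As one hedged example, an External metric target with an average-value goal looks like this (the metric name is hypothetical):

package main

import (
	"fmt"

	autoscalingv2 "k8s.io/api/autoscaling/v2"
	"k8s.io/apimachinery/pkg/api/resource"
)

func externalMetric() autoscalingv2.MetricSpec {
	avg := resource.MustParse("20")
	return autoscalingv2.MetricSpec{
		Type: autoscalingv2.ExternalMetricSourceType,
		External: &autoscalingv2.ExternalMetricSource{
			Metric: autoscalingv2.MetricIdentifier{
				Name: "custom.googleapis.com|queue_depth", // hypothetical metric
			},
			Target: autoscalingv2.MetricTarget{
				Type:         autoscalingv2.AverageValueMetricType,
				AverageValue: &avg,
			},
		},
	}
}

func main() { fmt.Println(externalMetric().Type) }
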
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 should support forwarding over websockets
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding Wi