Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2022-07-20 19:22
Elapsed: 5h15m
Revision: release-1.22



Error lines from build-log.txt

... skipping 95 lines ...
k8s-master-93387234-0   Ready   master   14m   v1.22.13-rc.0.1+bb6a7243193691
2022/07/20 19:46:14 process.go:153: Running: kubectl --match-server-version=false version
2022/07/20 19:46:14 process.go:155: Step 'kubectl --match-server-version=false version' finished in 266.843202ms
2022/07/20 19:46:14 process.go:153: Running: ./hack/ginkgo-e2e.sh --node-os-distro=windows --ginkgo.focus=(\[sig-windows\]|\[sig-scheduling\].SchedulerPreemption|\[sig-autoscaling\].\[Feature:HPA\]|\[sig-apps\].CronJob).*(\[Serial\]|\[Slow\])|(\[Serial\]|\[Slow\]).*(\[Conformance\]|\[NodeConformance\]) --ginkgo.skip=\[LinuxOnly\]|device.plugin.for.Windows|\[sig-scheduling\].SchedulerPredicates.\[Serial\].validates.that.there.is.no.conflict.between.pods.with.same.hostPort --report-dir=/logs/artifacts --disable-log-dump=true
Conformance test: not doing test setup.
I0720 19:46:16.782198   43227 e2e.go:129] Starting e2e run "04d71e32-ea62-454f-8462-8bbbe7d0ef44" on Ginkgo node 1
{"msg":"Test Suite starting","total":46,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1658346375 - Will randomize all specs
Will run 46 of 6442 specs
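Note: the 46-of-6442 selection above comes from the --ginkgo.focus / --ginkgo.skip regexes in the ginkgo-e2e.sh invocation. A minimal, runnable Go sketch (not part of the log) showing how the focus and skip expressions classify the spec that later fails in this run:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Focus and skip expressions copied verbatim from the ginkgo-e2e.sh line above.
	focus := regexp.MustCompile(`(\[sig-windows\]|\[sig-scheduling\].SchedulerPreemption|\[sig-autoscaling\].\[Feature:HPA\]|\[sig-apps\].CronJob).*(\[Serial\]|\[Slow\])|(\[Serial\]|\[Slow\]).*(\[Conformance\]|\[NodeConformance\])`)
	skip := regexp.MustCompile(`\[LinuxOnly\]|device.plugin.for.Windows|\[sig-scheduling\].SchedulerPredicates.\[Serial\].validates.that.there.is.no.conflict.between.pods.with.same.hostPort`)

	// Full name of the StatefulSet spec that fails later in this run.
	spec := "[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]"

	// A spec is selected when it matches focus and does not match skip;
	// here the second focus alternative ([Slow].*[Conformance]) matches.
	fmt.Println(focus.MatchString(spec) && !skip.MatchString(spec)) // true
}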

Jul 20 19:46:18.985: INFO: >>> kubeConfig: /root/tmp1415318856/kubeconfig/kubeconfig.westus2.json
... skipping 8 lines ...
Jul 20 19:46:19.765: INFO: kube-apiserver version: v1.22.13-rc.0.1+bb6a7243193691
Jul 20 19:46:19.765: INFO: >>> kubeConfig: /root/tmp1415318856/kubeconfig/kubeconfig.westus2.json
Jul 20 19:46:19.827: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 20 19:46:19.828: INFO: >>> kubeConfig: /root/tmp1415318856/kubeconfig/kubeconfig.westus2.json
STEP: Building a namespace api object, basename daemonsets
... skipping 4 lines ...
W0720 19:46:20.274814   43227 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jul 20 19:46:20.336: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-6241
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:142
[It] should retry creating failed daemon pods [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jul 20 19:46:21.168: INFO: DaemonSet pods can't tolerate node k8s-master-93387234-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jul 20 19:46:21.229: INFO: Number of nodes with available pods: 0
Jul 20 19:46:21.229: INFO: Node 9338k8s000 is running more than one daemon pod
... skipping 84 lines ...
Jul 20 19:46:50.346: INFO: DaemonSet pods can't tolerate node k8s-master-93387234-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jul 20 19:46:50.405: INFO: Number of nodes with available pods: 2
Jul 20 19:46:50.406: INFO: Node 9338k8s010 is running more than one daemon pod
Jul 20 19:46:51.346: INFO: DaemonSet pods can't tolerate node k8s-master-93387234-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jul 20 19:46:51.405: INFO: Number of nodes with available pods: 3
Jul 20 19:46:51.405: INFO: Number of running nodes: 3, number of available pods: 3
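Note: the recurring "can't tolerate node ... with taints" lines reflect a key/effect toleration check. A simplified, self-contained sketch of that comparison (illustrative helper, not the framework's actual implementation):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// tolerates reports whether any toleration in the list matches the taint.
// Simplified: real matching also honors operators and empty-key wildcards.
func tolerates(tolerations []v1.Toleration, taint v1.Taint) bool {
	for _, t := range tolerations {
		if t.Key == taint.Key && (t.Effect == "" || t.Effect == taint.Effect) {
			return true
		}
	}
	return false
}

func main() {
	masterTaint := v1.Taint{Key: "node-role.kubernetes.io/master", Effect: v1.TaintEffectNoSchedule}
	// The e2e DaemonSet pods carry no toleration for the master taint,
	// so the framework skips k8s-master-93387234-0 when counting pods.
	fmt.Println(tolerates(nil, masterTaint)) // false
}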
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jul 20 19:46:51.653: INFO: DaemonSet pods can't tolerate node k8s-master-93387234-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jul 20 19:46:51.714: INFO: Number of nodes with available pods: 2
Jul 20 19:46:51.714: INFO: Node 9338k8s010 is running more than one daemon pod
Jul 20 19:46:52.835: INFO: DaemonSet pods can't tolerate node k8s-master-93387234-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jul 20 19:46:52.894: INFO: Number of nodes with available pods: 2
Jul 20 19:46:52.894: INFO: Node 9338k8s010 is running more than one daemon pod
... skipping 15 lines ...
Jul 20 19:46:58.831: INFO: DaemonSet pods can't tolerate node k8s-master-93387234-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jul 20 19:46:58.891: INFO: Number of nodes with available pods: 2
Jul 20 19:46:58.891: INFO: Node 9338k8s010 is running more than one daemon pod
Jul 20 19:46:59.829: INFO: DaemonSet pods can't tolerate node k8s-master-93387234-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jul 20 19:46:59.889: INFO: Number of nodes with available pods: 3
Jul 20 19:46:59.889: INFO: Number of running nodes: 3, number of available pods: 3
STEP: Wait for the failed daemon pod to be completely deleted.
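Note: the 'Set a daemon pod's phase to Failed' step above is a status subresource update. A hedged client-go sketch of that operation (function name and wiring are illustrative, not the test's literal code):

package sketch

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// failDaemonPod forces one DaemonSet pod into the Failed phase so the
// controller has to create a replacement (the "revived" pod in the log).
func failDaemonPod(ctx context.Context, client kubernetes.Interface, ns, name string) error {
	pod, err := client.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	pod.Status.Phase = v1.PodFailed
	_, err = client.CoreV1().Pods(ns).UpdateStatus(ctx, pod, metav1.UpdateOptions{})
	return err
}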
[AfterEach] [sig-apps] Daemon set [Serial]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:108
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6241, will wait for the garbage collector to delete the pods
Jul 20 19:47:00.226: INFO: Deleting DaemonSet.extensions daemon-set took: 60.673192ms
Jul 20 19:47:00.326: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.201796ms
... skipping 4 lines ...
Jul 20 19:47:06.907: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2111"},"items":null}
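Note: "will wait for the garbage collector to delete the pods" corresponds to a delete with background propagation followed by polling. A client-go sketch of the delete half, under that assumption:

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteDaemonSet removes the DaemonSet with background propagation, leaving
// the garbage collector to clean up its pods; the framework then polls the
// pod list (as above) until "items":null.
func deleteDaemonSet(ctx context.Context, client kubernetes.Interface, ns, name string) error {
	policy := metav1.DeletePropagationBackground
	return client.AppsV1().DaemonSets(ns).Delete(ctx, name, metav1.DeleteOptions{
		PropagationPolicy: &policy,
	})
}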

[AfterEach] [sig-apps] Daemon set [Serial]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 20 19:47:07.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6241" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":46,"completed":1,"skipped":235,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment 
  Should scale from 1 pod to 3 pods and from 3 to 5
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:39
[BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
... skipping 169 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/autoscaling/framework.go:23
  [Serial] [Slow] Deployment
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:37
    Should scale from 1 pod to 3 pods and from 3 to 5
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:39
------------------------------
{"msg":"PASSED [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5","total":46,"completed":2,"skipped":320,"failed":0}
SSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] 
  validates lower priority pod preemption by critical pod [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
... skipping 21 lines ...
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 20 19:58:27.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-4721" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78
•{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":46,"completed":3,"skipped":327,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] StatefulSet
... skipping 127 lines ...
Jul 20 19:59:16.760: INFO: ss-2  9338k8s010  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-07-20 19:58:40 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-07-20 19:58:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-07-20 19:58:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-07-20 19:58:40 +0000 UTC  }]
Jul 20 19:59:16.760: INFO: 
Jul 20 19:59:16.760: INFO: StatefulSet ss has not reached scale 0, at 1
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-5542
Jul 20 19:59:17.821: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://kubetest-n9b0p3f9.westus2.cloudapp.azure.com --kubeconfig=/root/tmp1415318856/kubeconfig/kubeconfig.westus2.json --namespace=statefulset-5542 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 20 19:59:18.497: INFO: rc: 1
Jul 20 19:59:18.497: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://kubetest-n9b0p3f9.westus2.cloudapp.azure.com --kubeconfig=/root/tmp1415318856/kubeconfig/kubeconfig.westus2.json --namespace=statefulset-5542 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
... skipping 280 lines: the same kubectl exec retry ('mv -v /tmp/index.html /usr/local/apache2/htdocs/' on ss-2) repeats every 10s from 19:59:28 to 20:04:16, each attempt ending with rc: 1 and 'error: unable to upgrade connection: container not found ("webserver")' ...
Jul 20 20:04:26.154: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://kubetest-n9b0p3f9.westus2.cloudapp.azure.com --kubeconfig=/root/tmp1415318856/kubeconfig/kubeconfig.westus2.json --namespace=statefulset-5542 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 20 20:04:26.768: INFO: rc: 1
Jul 20 20:04:26.768: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: 
Jul 20 20:04:26.768: INFO: Scaling statefulset ss to 0
Jul 20 20:14:27.228: INFO: Waiting for statefulset status.replicas updated to 0
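Note: "Scaling statefulset ss to 0" is typically done through the scale subresource. A hedged client-go sketch of that call (assuming a configured kubernetes.Interface; not the framework's literal code):

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// scaleStatefulSetToZero uses the scale subresource, roughly what
// "Scaling statefulset ss to 0" does before the wait begins.
func scaleStatefulSetToZero(ctx context.Context, client kubernetes.Interface, ns, name string) error {
	scale, err := client.AppsV1().StatefulSets(ns).GetScale(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	scale.Spec.Replicas = 0
	_, err = client.AppsV1().StatefulSets(ns).UpdateScale(ctx, name, scale, metav1.UpdateOptions{})
	return err
}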
... skipping 56 lines ...
Jul 20 20:23:47.344: INFO: Waiting for stateful set status.replicas to become 0, currently 1
Jul 20 20:23:57.345: INFO: Waiting for stateful set status.replicas to become 0, currently 1
Jul 20 20:24:07.345: INFO: Waiting for stateful set status.replicas to become 0, currently 1
Jul 20 20:24:17.345: INFO: Waiting for stateful set status.replicas to become 0, currently 1
Jul 20 20:24:27.345: INFO: Waiting for stateful set status.replicas to become 0, currently 1
Jul 20 20:24:27.402: INFO: Waiting for stateful set status.replicas to become 0, currently 1
Jul 20 20:24:27.403: FAIL: Failed waiting for stateful set status.replicas updated to 0: timed out waiting for the condition

Full Stack Trace
k8s.io/kubernetes/test/e2e/apps.glob..func9.2.11()
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:725 +0x81f
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000703980)
	_output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
... skipping 4 lines ...
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1248 +0x2b3
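Note: "timed out waiting for the condition" is the stock wait.ErrWaitTimeout message from k8s.io/apimachinery. A minimal sketch of the polling pattern behind the FAIL above (interval and timeout approximated from the log's 10s cadence):

package sketch

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForZeroReplicas polls status.replicas every 10s. On timeout,
// wait.PollImmediate returns wait.ErrWaitTimeout, whose message is exactly
// "timed out waiting for the condition" as seen in the FAIL line above.
func waitForZeroReplicas(client kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(10*time.Second, 10*time.Minute, func() (bool, error) {
		ss, err := client.AppsV1().StatefulSets(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return ss.Status.Replicas == 0, nil
	})
}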
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118
Jul 20 20:24:27.464: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://kubetest-n9b0p3f9.westus2.cloudapp.azure.com --kubeconfig=/root/tmp1415318856/kubeconfig/kubeconfig.westus2.json --namespace=statefulset-5542 describe po ss-2'
Jul 20 20:24:28.317: INFO: stderr: ""
Jul 20 20:24:28.317: INFO: stdout: "Name:                      ss-2\nNamespace:                 statefulset-5542\nPriority:                  0\nNode:                      9338k8s010/10.240.0.65\nStart Time:                Wed, 20 Jul 2022 19:58:40 +0000\nLabels:                    baz=blah\n                           controller-revision-hash=ss-677d6db895\n                           foo=bar\n                           statefulset.kubernetes.io/pod-name=ss-2\nAnnotations:               kubernetes.io/psp: e2e-test-privileged-psp\nStatus:                    Terminating (lasts 24m)\nTermination Grace Period:  30s\nIP:                        10.240.0.71\nIPs:\n  IP:           10.240.0.71\nControlled By:  StatefulSet/ss\nContainers:\n  webserver:\n    Container ID:   containerd://d6544433ae18dcc63a4c91a93b80258674026d65d745da85e070c0817bd166fa\n    Image:          k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\n    Image ID:       k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\n    Port:           <none>\n    Host Port:      <none>\n    State:          Terminated\n      Reason:       Completed\n      Exit Code:    0\n      Started:      Wed, 20 Jul 2022 19:58:45 +0000\n      Finished:     Wed, 20 Jul 2022 19:59:12 +0000\n    Ready:          False\n    Restart Count:  0\n    Readiness:      http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7w47f (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             False \n  ContainersReady   False \n  PodScheduled      True \nVolumes:\n  kube-api-access-7w47f:\n    Type:                    Projected (a volume that contains injected data from multiple sources)\n    TokenExpirationSeconds:  3607\n    ConfigMapName:           kube-root-ca.crt\n    ConfigMapOptional:       <nil>\n    DownwardAPI:             true\nQoS Class:                   BestEffort\nNode-Selectors:              <none>\nTolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n  Type     Reason     Age                  From               Message\n  ----     ------     ----                 ----               -------\n  Normal   Scheduled  25m                  default-scheduler  Successfully assigned statefulset-5542/ss-2 to 9338k8s010\n  Normal   Pulled     25m                  kubelet            Container image \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\" already present on machine\n  Normal   Created    25m                  kubelet            Created container webserver\n  Normal   Started    25m                  kubelet            Started container webserver\n  Warning  Unhealthy  25m (x13 over 25m)   kubelet            Readiness probe failed: HTTP probe failed with statuscode: 404\n  Normal   Killing    25m                  kubelet            Stopping container webserver\n  Warning  Unhealthy  25m                  kubelet            Readiness probe failed: Get \"http://10.240.0.71:80/index.html\": read tcp 10.240.0.65:62103->10.240.0.71:80: wsarecv: An existing connection was forcibly closed by the remote host.\n  Warning  Unhealthy  10m (x862 over 25m)  kubelet            Readiness probe failed: Get \"http://10.240.0.71:80/index.html\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\n"
Jul 20 20:24:28.317: INFO: 
Output of kubectl describe ss-2:
Name:                      ss-2
Namespace:                 statefulset-5542
Priority:                  0
Node:                      9338k8s010/10.240.0.65
... skipping 48 lines ...
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  25m                  default-scheduler  Successfully assigned statefulset-5542/ss-2 to 9338k8s010
  Normal   Pulled     25m                  kubelet            Container image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" already present on machine
  Normal   Created    25m                  kubelet            Created container webserver
  Normal   Started    25m                  kubelet            Started container webserver
  Warning  Unhealthy  25m (x13 over 25m)   kubelet            Readiness probe failed: HTTP probe failed with statuscode: 404
  Normal   Killing    25m                  kubelet            Stopping container webserver
  Warning  Unhealthy  25m                  kubelet            Readiness probe failed: Get "http://10.240.0.71:80/index.html": read tcp 10.240.0.65:62103->10.240.0.71:80: wsarecv: An existing connection was forcibly closed by the remote host.
  Warning  Unhealthy  10m (x862 over 25m)  kubelet            Readiness probe failed: Get "http://10.240.0.71:80/index.html": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
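Note: with period=1s, timeout=1s and #failure=1 (see the Readiness line in the describe output), a single slow or non-2xx response marks the pod NotReady. A stdlib-only Go sketch of equivalent probe semantics (not kubelet's code):

package sketch

import (
	"fmt"
	"net/http"
	"time"
)

// ready mimics the readiness probe above: GET /index.html with a 1s timeout.
// Any error (e.g. "context deadline exceeded" in the events) or a status
// outside 2xx/3xx counts as a failure, and with FailureThreshold=1 one
// failure is enough to flip the pod to NotReady.
func ready(podIP string) bool {
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get(fmt.Sprintf("http://%s:80/index.html", podIP))
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode >= 200 && resp.StatusCode < 400
}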

Jul 20 20:24:28.317: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://kubetest-n9b0p3f9.westus2.cloudapp.azure.com --kubeconfig=/root/tmp1415318856/kubeconfig/kubeconfig.westus2.json --namespace=statefulset-5542 logs ss-2 --tail=100'
Jul 20 20:24:28.747: INFO: stderr: ""
Jul 20 20:24:28.747: INFO: stdout: ""
Jul 20 20:24:28.747: INFO: 
Last 100 log lines of ss-2:
... skipping 60 lines ...
Jul 20 20:43:49.213: INFO: Waiting for stateful set status.replicas to become 0, currently 1
Jul 20 20:43:59.212: INFO: Waiting for stateful set status.replicas to become 0, currently 1
Jul 20 20:44:09.214: INFO: Waiting for stateful set status.replicas to become 0, currently 1
Jul 20 20:44:19.214: INFO: Waiting for stateful set status.replicas to become 0, currently 1
Jul 20 20:44:29.213: INFO: Waiting for stateful set status.replicas to become 0, currently 1
Jul 20 20:44:29.269: INFO: Waiting for stateful set status.replicas to become 0, currently 1
Jul 20 20:44:29.270: FAIL: Failed waiting for stateful set status.replicas updated to 0: timed out waiting for the condition

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework/statefulset.DeleteAllStatefulSets(0x7971588, 0xc0013889a0, 0xc00408ce30, 0x10)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/rest.go:86 +0x3cd
k8s.io/kubernetes/test/e2e/apps.glob..func9.2.2()
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:123 +0x145
... skipping 13 lines ...
Jul 20 20:44:29.383: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for ss-1: { } Scheduled: Successfully assigned statefulset-5542/ss-1 to 9338k8s001
Jul 20 20:44:29.383: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for ss-2: { } Scheduled: Successfully assigned statefulset-5542/ss-2 to 9338k8s010
Jul 20 20:44:29.383: INFO: At 2022-07-20 19:58:28 +0000 UTC - event for ss: {statefulset-controller } SuccessfulCreate: create Pod ss-0 in StatefulSet ss successful
Jul 20 20:44:29.383: INFO: At 2022-07-20 19:58:31 +0000 UTC - event for ss-0: {kubelet 9338k8s000} Created: Created container webserver
Jul 20 20:44:29.383: INFO: At 2022-07-20 19:58:31 +0000 UTC - event for ss-0: {kubelet 9338k8s000} Pulled: Container image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" already present on machine
Jul 20 20:44:29.383: INFO: At 2022-07-20 19:58:34 +0000 UTC - event for ss-0: {kubelet 9338k8s000} Started: Started container webserver
Jul 20 20:44:29.383: INFO: At 2022-07-20 19:58:35 +0000 UTC - event for ss-0: {kubelet 9338k8s000} Unhealthy: Readiness probe failed: Get "http://10.240.0.111:80/index.html": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jul 20 20:44:29.383: INFO: At 2022-07-20 19:58:40 +0000 UTC - event for ss: {statefulset-controller } SuccessfulCreate: create Pod ss-2 in StatefulSet ss successful
Jul 20 20:44:29.383: INFO: At 2022-07-20 19:58:40 +0000 UTC - event for ss: {statefulset-controller } SuccessfulCreate: create Pod ss-1 in StatefulSet ss successful
Jul 20 20:44:29.383: INFO: At 2022-07-20 19:58:40 +0000 UTC - event for ss-0: {kubelet 9338k8s000} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 404
Jul 20 20:44:29.383: INFO: At 2022-07-20 19:58:43 +0000 UTC - event for ss-1: {kubelet 9338k8s001} Created: Created container webserver
Jul 20 20:44:29.383: INFO: At 2022-07-20 19:58:43 +0000 UTC - event for ss-1: {kubelet 9338k8s001} Pulled: Container image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" already present on machine
Jul 20 20:44:29.383: INFO: At 2022-07-20 19:58:43 +0000 UTC - event for ss-2: {kubelet 9338k8s010} Created: Created container webserver
Jul 20 20:44:29.383: INFO: At 2022-07-20 19:58:43 +0000 UTC - event for ss-2: {kubelet 9338k8s010} Pulled: Container image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" already present on machine
Jul 20 20:44:29.383: INFO: At 2022-07-20 19:58:44 +0000 UTC - event for ss-1: {kubelet 9338k8s001} Started: Started container webserver
Jul 20 20:44:29.383: INFO: At 2022-07-20 19:58:45 +0000 UTC - event for ss-2: {kubelet 9338k8s010} Started: Started container webserver
Jul 20 20:44:29.383: INFO: At 2022-07-20 19:58:46 +0000 UTC - event for ss-2: {kubelet 9338k8s010} Unhealthy: Readiness probe failed: Get "http://10.240.0.71:80/index.html": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jul 20 20:44:29.383: INFO: At 2022-07-20 19:58:47 +0000 UTC - event for ss-1: {kubelet 9338k8s001} Unhealthy: Readiness probe failed: Get "http://10.240.0.36:80/index.html": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jul 20 20:44:29.383: INFO: At 2022-07-20 19:58:56 +0000 UTC - event for ss-1: {kubelet 9338k8s001} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 404
Jul 20 20:44:29.383: INFO: At 2022-07-20 19:58:56 +0000 UTC - event for ss-2: {kubelet 9338k8s010} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 404
Jul 20 20:44:29.383: INFO: At 2022-07-20 19:59:07 +0000 UTC - event for ss: {statefulset-controller } SuccessfulDelete: delete Pod ss-0 in StatefulSet ss successful
Jul 20 20:44:29.383: INFO: At 2022-07-20 19:59:07 +0000 UTC - event for ss: {statefulset-controller } SuccessfulDelete: delete Pod ss-1 in StatefulSet ss successful
Jul 20 20:44:29.383: INFO: At 2022-07-20 19:59:07 +0000 UTC - event for ss: {statefulset-controller } SuccessfulDelete: delete Pod ss-2 in StatefulSet ss successful
Jul 20 20:44:29.383: INFO: At 2022-07-20 19:59:07 +0000 UTC - event for ss-1: {kubelet 9338k8s001} Killing: Stopping container webserver
Jul 20 20:44:29.383: INFO: At 2022-07-20 19:59:07 +0000 UTC - event for ss-2: {kubelet 9338k8s010} Killing: Stopping container webserver
Jul 20 20:44:29.383: INFO: At 2022-07-20 19:59:12 +0000 UTC - event for ss-1: {kubelet 9338k8s001} Unhealthy: Readiness probe failed: Get "http://10.240.0.36:80/index.html": read tcp 10.240.0.34:36869->10.240.0.36:80: wsarecv: An existing connection was forcibly closed by the remote host.
Jul 20 20:44:29.383: INFO: At 2022-07-20 19:59:12 +0000 UTC - event for ss-2: {kubelet 9338k8s010} Unhealthy: Readiness probe failed: Get "http://10.240.0.71:80/index.html": read tcp 10.240.0.65:62103->10.240.0.71:80: wsarecv: An existing connection was forcibly closed by the remote host.
Jul 20 20:44:29.383: INFO: At 2022-07-20 20:14:12 +0000 UTC - event for ss-2: {kubelet 9338k8s010} FailedKillPod: error killing pod: failed to "KillPodSandbox" for "9616e190-87d5-4372-9b8a-c324a79c2347" with KillPodSandboxError: "rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Jul 20 20:44:29.383: INFO: At 2022-07-20 20:29:13 +0000 UTC - event for ss-2: {kubelet 9338k8s010} FailedKillPod: error killing pod: failed to "KillPodSandbox" for "9616e190-87d5-4372-9b8a-c324a79c2347" with KillPodSandboxError: "rpc error: code = Unknown desc = failed to destroy network for sandbox \"8f81bc9c2e818c9450393d25200c3d16e19299db11ff3d5368ecbb1e2c52b83c\": Failed to delete endpoint: Failed to remove hcn endpoint: eedc5f05-3655-4e20-a88b-a0b84157e2a7 from namespace: 00000000-0000-0000-0000-000000000000 due to error: hcnOpenNamespace failed in Win32: Element not found. (0x490) {\"Success\":false,\"Error\":\"Element not found. \",\"ErrorCode\":2147943568}"
Jul 20 20:44:29.440: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Jul 20 20:44:29.440: INFO: ss-2  9338k8s010  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-07-20 19:58:40 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-07-20 19:58:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-07-20 19:58:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-07-20 19:58:40 +0000 UTC  }]
Jul 20 20:44:29.440: INFO: 
Jul 20 20:44:29.498: INFO: 
Logging node info for node 9338k8s000
Jul 20 20:44:29.554: INFO: Node Info: &Node{ObjectMeta:{9338k8s000    cfda53aa-7bcb-4be3-9165-75f07d6301bd 7945 0 2022-07-20 19:32:44 +0000 UTC <nil> <nil> map[agentpool:windowspool1 beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D2s_v3 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:1 kubernetes.azure.com/cluster:kubetest-n9b0p3f9 kubernetes.azure.com/role:agent kubernetes.io/arch:amd64 kubernetes.io/hostname:9338k8s000 kubernetes.io/os:windows kubernetes.io/role:agent node-role.kubernetes.io/agent: node.kubernetes.io/instance-type:Standard_D2s_v3 node.kubernetes.io/windows-build:10.0.17763 storageprofile:managed storagetier:Premium_LRS topology.kubernetes.io/region:westus2 topology.kubernetes.io/zone:1] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2022-07-20 19:32:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {kubelet.exe Update v1 2022-07-20 19:32:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:agentpool":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.azure.com/cluster":{},"f:kubernetes.azure.com/role":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:storageprofile":{},"f:storagetier":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubectl-label Update v1 2022-07-20 19:32:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/agent":{}}}} } {e2e.test Update v1 2022-07-20 19:57:47 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:scheduling.k8s.io/foo":{}}}} status} {kubelet.exe Update v1 2022-07-20 19:57:54 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:scheduling.k8s.io/foo":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-n9b0p3f9/providers/Microsoft.Compute/virtualMachines/9338k8s000,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{136912564224 0} {<nil>} 133703676Ki BinarySI},memory: {{8589463552 0} {<nil>} 8388148Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},scheduling.k8s.io/foo: {{5 0} {<nil>} 5 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{136912564224 0} {<nil>} 133703676Ki BinarySI},memory: {{6441979904 0} {<nil>} 6290996Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},scheduling.k8s.io/foo: {{5 0} {<nil>} 5 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-07-20 20:43:39 +0000 UTC,LastTransitionTime:2022-07-20 19:32:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-07-20 20:43:39 +0000 UTC,LastTransitionTime:2022-07-20 19:32:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-07-20 20:43:39 +0000 UTC,LastTransitionTime:2022-07-20 19:32:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-07-20 20:43:39 +0000 UTC,LastTransitionTime:2022-07-20 19:32:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:9338k8s000,},NodeAddress{Type:InternalIP,Address:10.240.0.96,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9338k8s000,SystemUUID:88DC7DBD-645A-43BA-A888-AC5BF2F4AEF2,BootID:,KernelVersion:10.0.17763.2300,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.5.8,KubeletVersion:v1.22.13-rc.0.1+bb6a7243193691,KubeProxyVersion:v1.22.13-rc.0.1+bb6a7243193691,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[mcr.microsoft.com/windows/servercore:ltsc2019],SizeBytes:2708606382,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:202103637,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar:v1.2.1-alpha.1-windows-1809-amd64],SizeBytes:108116897,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.0.1-alpha.1-windows-1809-amd64],SizeBytes:107834550,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes/pause:1.4.1],SizeBytes:107267487,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes/pause@sha256:565501aebaf23dd687a04ae41511819c854b0447aab91acf70217cc56885ea47 mcr.microsoft.com/oss/kubernetes/pause:3.4.1 mcr.microsoft.com/oss/kubernetes/pause:3.4.1-windows-1809-amd64],SizeBytes:106226157,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/resource-consumer@sha256:0ff025095d8ef518c787fafc6a5769c9e7d67e71c85b14d4ebb9ca211c469314 k8s.gcr.io/e2e-test-images/resource-consumer:1.9],SizeBytes:104289195,},ContainerImage{Names:[mcr.microsoft.com/windows/nanoserver:1809],SizeBytes:102905458,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:102802876,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
... skipping 58 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] [It]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

    Jul 20 20:24:27.403: Failed waiting for stateful set status.replicas updated to 0: timed out waiting for the condition

    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:725
------------------------------
{"msg":"FAILED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":46,"completed":3,"skipped":373,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 27 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 20 20:44:44.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9845" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":46,"completed":4,"skipped":429,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] GMSA support 
  works end to end
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/windows/gmsa_full.go:91
[BeforeEach] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow]
... skipping 163 lines ...
Jul 20 20:49:37.903: INFO: The status of Pod retrieve-gmsa-crd-contents is Pending, waiting for it to be Running (with Ready = true)
Jul 20 20:49:39.901: INFO: The status of Pod retrieve-gmsa-crd-contents is Pending, waiting for it to be Running (with Ready = true)
Jul 20 20:49:41.901: INFO: The status of Pod retrieve-gmsa-crd-contents is Pending, waiting for it to be Running (with Ready = true)
Jul 20 20:49:43.900: INFO: The status of Pod retrieve-gmsa-crd-contents is Pending, waiting for it to be Running (with Ready = true)
Jul 20 20:49:45.902: INFO: The status of Pod retrieve-gmsa-crd-contents is Pending, waiting for it to be Running (with Ready = true)
Jul 20 20:49:45.959: INFO: The status of Pod retrieve-gmsa-crd-contents is Pending, waiting for it to be Running (with Ready = true)
Jul 20 20:49:45.959: FAIL: Unexpected error:
    <*errors.errorString | 0xc0002be280>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 84 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/windows/framework.go:27
  GMSA support
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/windows/gmsa_full.go:90
    works end to end [It]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/windows/gmsa_full.go:91

    Jul 20 20:49:45.959: Unexpected error:
        <*errors.errorString | 0xc0002be280>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred

    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:103
------------------------------
{"msg":"FAILED [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] GMSA support works end to end","total":46,"completed":4,"skipped":499,"failed":2,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","[sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] GMSA support works end to end"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController 
  Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:60
[BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
... skipping 515 lines ...
Jul 20 21:16:47.502: INFO: RC rc: sending request to consume 0 MB
Jul 20 21:16:47.502: INFO: ConsumeMem URL: {https   kubetest-n9b0p3f9.westus2.cloudapp.azure.com /api/v1/namespaces/horizontal-pod-autoscaling-4385/services/rc-ctrl/proxy/ConsumeMem  false durationSec=30&megabytes=0&requestSizeMegabytes=100  }
Jul 20 21:16:53.532: INFO: RC rc: sending request to consume 0 of custom metric QPS
Jul 20 21:16:53.532: INFO: ConsumeCustomMetric URL: {https   kubetest-n9b0p3f9.westus2.cloudapp.azure.com /api/v1/namespaces/horizontal-pod-autoscaling-4385/services/rc-ctrl/proxy/BumpMetric  false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10  }
Jul 20 21:17:04.896: INFO: waiting for 5 replicas (current: 4)
Jul 20 21:17:04.952: INFO: waiting for 5 replicas (current: 4)
Jul 20 21:17:04.953: FAIL: timeout waiting 15m0s for 5 replicas
Unexpected error:
    <*errors.errorString | 0xc0002be280>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 45 lines ...
Jul 20 21:33:28.911: INFO: At 2022-07-20 20:51:14 +0000 UTC - event for rc: {replication-controller } SuccessfulCreate: Created pod: rc-v8qlx
Jul 20 21:33:28.911: INFO: At 2022-07-20 20:51:14 +0000 UTC - event for rc: {replication-controller } SuccessfulCreate: Created pod: rc-5vqwl
Jul 20 21:33:28.911: INFO: At 2022-07-20 20:51:14 +0000 UTC - event for rc: {horizontal-pod-autoscaler } SuccessfulRescale: New size: 3; reason: cpu resource utilization (percentage of request) above target
Jul 20 21:33:28.911: INFO: At 2022-07-20 20:51:16 +0000 UTC - event for rc-5vqwl: {kubelet 9338k8s000} Pulled: Container image "k8s.gcr.io/e2e-test-images/resource-consumer:1.9" already present on machine
Jul 20 21:33:28.911: INFO: At 2022-07-20 20:51:17 +0000 UTC - event for rc-5vqwl: {kubelet 9338k8s000} Created: Created container rc
Jul 20 21:33:28.911: INFO: At 2022-07-20 20:51:18 +0000 UTC - event for rc-5vqwl: {kubelet 9338k8s000} Started: Started container rc
Jul 20 21:33:28.911: INFO: At 2022-07-20 20:51:27 +0000 UTC - event for rc-v8qlx: {kubelet 9338k8s010} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "d5b8af12020153b6e727de694c6a0073079a4069c7953e9ec24158b7a7ed4d6b": unexpected end of JSON input
Jul 20 21:33:28.911: INFO: At 2022-07-20 20:51:44 +0000 UTC - event for rc: {horizontal-pod-autoscaler } SuccessfulRescale: New size: 4; reason: cpu resource utilization (percentage of request) above target
Jul 20 21:33:28.911: INFO: At 2022-07-20 20:51:44 +0000 UTC - event for rc: {replication-controller } SuccessfulCreate: Created pod: rc-g2p2x
Jul 20 21:33:28.911: INFO: At 2022-07-20 20:51:47 +0000 UTC - event for rc-g2p2x: {kubelet 9338k8s000} Pulled: Container image "k8s.gcr.io/e2e-test-images/resource-consumer:1.9" already present on machine
Jul 20 21:33:28.911: INFO: At 2022-07-20 20:51:47 +0000 UTC - event for rc-g2p2x: {kubelet 9338k8s000} Created: Created container rc
Jul 20 21:33:28.911: INFO: At 2022-07-20 20:51:49 +0000 UTC - event for rc-g2p2x: {kubelet 9338k8s000} Started: Started container rc
Jul 20 21:33:28.911: INFO: At 2022-07-20 21:00:15 +0000 UTC - event for rc: {replication-controller } SuccessfulDelete: Deleted pod: rc-v8qlx
Jul 20 21:33:28.911: INFO: At 2022-07-20 21:00:15 +0000 UTC - event for rc: {horizontal-pod-autoscaler } SuccessfulRescale: New size: 3; reason: All metrics below target
Jul 20 21:33:28.911: INFO: At 2022-07-20 21:03:01 +0000 UTC - event for rc: {replication-controller } SuccessfulCreate: Created pod: rc-2wsh8
Jul 20 21:33:28.911: INFO: At 2022-07-20 21:03:01 +0000 UTC - event for rc: {horizontal-pod-autoscaler } SuccessfulRescale: New size: 5; reason: cpu resource utilization (percentage of request) above target
Jul 20 21:33:28.911: INFO: At 2022-07-20 21:03:01 +0000 UTC - event for rc: {replication-controller } SuccessfulCreate: Created pod: rc-7xt6b
Jul 20 21:33:28.911: INFO: At 2022-07-20 21:03:03 +0000 UTC - event for rc-7xt6b: {kubelet 9338k8s001} Created: Created container rc
Jul 20 21:33:28.911: INFO: At 2022-07-20 21:03:03 +0000 UTC - event for rc-7xt6b: {kubelet 9338k8s001} Pulled: Container image "k8s.gcr.io/e2e-test-images/resource-consumer:1.9" already present on machine
Jul 20 21:33:28.911: INFO: At 2022-07-20 21:03:05 +0000 UTC - event for rc-7xt6b: {kubelet 9338k8s001} Started: Started container rc
Jul 20 21:33:28.911: INFO: At 2022-07-20 21:03:13 +0000 UTC - event for rc-2wsh8: {kubelet 9338k8s010} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "2aa04cb4d1c7fa509a058ab84ddcb3a947cb17f4c1b998b900db03612706ffa1": unexpected end of JSON input
Jul 20 21:33:28.911: INFO: At 2022-07-20 21:17:15 +0000 UTC - event for rc-4kp6b: {kubelet 9338k8s001} Killing: Stopping container rc
Jul 20 21:33:28.911: INFO: At 2022-07-20 21:17:15 +0000 UTC - event for rc-5vqwl: {kubelet 9338k8s000} Killing: Stopping container rc
Jul 20 21:33:28.911: INFO: At 2022-07-20 21:17:15 +0000 UTC - event for rc-7xt6b: {kubelet 9338k8s001} Killing: Stopping container rc
Jul 20 21:33:28.911: INFO: At 2022-07-20 21:17:15 +0000 UTC - event for rc-g2p2x: {kubelet 9338k8s000} Killing: Stopping container rc
Jul 20 21:33:28.911: INFO: At 2022-07-20 21:33:26 +0000 UTC - event for rc-ctrl-bh6cv: {kubelet 9338k8s000} Killing: Stopping container rc-ctrl
Jul 20 21:33:28.968: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
... skipping 61 lines ...
  [Serial] [Slow] ReplicationController
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:58
    Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability [It]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:60

    Jul 20 21:17:04.953: timeout waiting 15m0s for 5 replicas
    Unexpected error:
        <*errors.errorString | 0xc0002be280>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred

    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:128
------------------------------
{"msg":"FAILED [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability","total":46,"completed":4,"skipped":573,"failed":3,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","[sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] GMSA support works end to end","[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Variable Expansion 
  should fail substituting values in a volume subpath with backticks [Slow] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Variable Expansion
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 20 21:33:30.848: INFO: >>> kubeConfig: /root/tmp1415318856/kubeconfig/kubeconfig.westus2.json
STEP: Building a namespace api object, basename var-expansion
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-8769
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jul 20 21:35:31.607: FAIL: while waiting for the pod container to fail
Unexpected error:
    <*errors.errorString | 0xc0002be280>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 14 lines ...
Jul 20 21:35:31.669: INFO: Wait up to 5m0s for pod "var-expansion-566cff4a-36cf-4385-b781-4b3b0abaf651" to be fully deleted
[AfterEach] [sig-node] Variable Expansion
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "var-expansion-8769".
STEP: Found 2 events.
Jul 20 21:40:31.901: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for var-expansion-566cff4a-36cf-4385-b781-4b3b0abaf651: { } Scheduled: Successfully assigned var-expansion-8769/var-expansion-566cff4a-36cf-4385-b781-4b3b0abaf651 to 9338k8s010
Jul 20 21:40:31.901: INFO: At 2022-07-20 21:33:44 +0000 UTC - event for var-expansion-566cff4a-36cf-4385-b781-4b3b0abaf651: {kubelet 9338k8s010} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "a88d8d86d6dc5604c5674494cdd69e8a241b037700484467d6f7ff42a812e278": unexpected end of JSON input
Jul 20 21:40:31.958: INFO: POD                                                 NODE        PHASE    GRACE  CONDITIONS
Jul 20 21:40:31.958: INFO: var-expansion-566cff4a-36cf-4385-b781-4b3b0abaf651  9338k8s010  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-07-20 21:33:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-07-20 21:33:31 +0000 UTC ContainersNotReady containers with unready status: [dapi-container]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-07-20 21:33:31 +0000 UTC ContainersNotReady containers with unready status: [dapi-container]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-07-20 21:33:31 +0000 UTC  }]
Jul 20 21:40:31.958: INFO: 
Jul 20 21:40:32.070: INFO: 
Logging node info for node 9338k8s000
Jul 20 21:40:32.127: INFO: Node Info: &Node{ObjectMeta:{9338k8s000    cfda53aa-7bcb-4be3-9165-75f07d6301bd 13106 0 2022-07-20 19:32:44 +0000 UTC <nil> <nil> map[agentpool:windowspool1 beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D2s_v3 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:1 kubernetes.azure.com/cluster:kubetest-n9b0p3f9 kubernetes.azure.com/role:agent kubernetes.io/arch:amd64 kubernetes.io/hostname:9338k8s000 kubernetes.io/os:windows kubernetes.io/role:agent node-role.kubernetes.io/agent: node.kubernetes.io/instance-type:Standard_D2s_v3 node.kubernetes.io/windows-build:10.0.17763 storageprofile:managed storagetier:Premium_LRS topology.kubernetes.io/region:westus2 topology.kubernetes.io/zone:1] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2022-07-20 19:32:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {kubelet.exe Update v1 2022-07-20 19:32:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:agentpool":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.azure.com/cluster":{},"f:kubernetes.azure.com/role":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:storageprofile":{},"f:storagetier":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubectl-label Update v1 2022-07-20 19:32:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/agent":{}}}} } {e2e.test Update v1 2022-07-20 19:57:47 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:scheduling.k8s.io/foo":{}}}} status} {kubelet.exe Update v1 2022-07-20 19:57:54 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:scheduling.k8s.io/foo":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-n9b0p3f9/providers/Microsoft.Compute/virtualMachines/9338k8s000,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{136912564224 0} {<nil>} 133703676Ki BinarySI},memory: {{8589463552 0} {<nil>} 8388148Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},scheduling.k8s.io/foo: {{5 0} {<nil>} 5 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{136912564224 0} {<nil>} 133703676Ki BinarySI},memory: {{6441979904 0} {<nil>} 6290996Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},scheduling.k8s.io/foo: {{5 0} {<nil>} 5 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-07-20 21:35:44 +0000 UTC,LastTransitionTime:2022-07-20 19:32:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-07-20 21:35:44 +0000 UTC,LastTransitionTime:2022-07-20 19:32:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-07-20 21:35:44 +0000 UTC,LastTransitionTime:2022-07-20 19:32:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-07-20 21:35:44 +0000 UTC,LastTransitionTime:2022-07-20 19:32:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:9338k8s000,},NodeAddress{Type:InternalIP,Address:10.240.0.96,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9338k8s000,SystemUUID:88DC7DBD-645A-43BA-A888-AC5BF2F4AEF2,BootID:,KernelVersion:10.0.17763.2300,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.5.8,KubeletVersion:v1.22.13-rc.0.1+bb6a7243193691,KubeProxyVersion:v1.22.13-rc.0.1+bb6a7243193691,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[mcr.microsoft.com/windows/servercore:ltsc2019],SizeBytes:2708606382,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:205990572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:202103637,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar:v1.2.1-alpha.1-windows-1809-amd64],SizeBytes:108116897,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.0.1-alpha.1-windows-1809-amd64],SizeBytes:107834550,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes/pause:1.4.1],SizeBytes:107267487,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes/pause@sha256:565501aebaf23dd687a04ae41511819c854b0447aab91acf70217cc56885ea47 mcr.microsoft.com/oss/kubernetes/pause:3.4.1 mcr.microsoft.com/oss/kubernetes/pause:3.4.1-windows-1809-amd64],SizeBytes:106226157,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/resource-consumer@sha256:0ff025095d8ef518c787fafc6a5769c9e7d67e71c85b14d4ebb9ca211c469314 k8s.gcr.io/e2e-test-images/resource-consumer:1.9],SizeBytes:104289195,},ContainerImage{Names:[mcr.microsoft.com/windows/nanoserver:1809],SizeBytes:102905458,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:102802876,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
... skipping 53 lines ...
Jul 20 21:40:33.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-8769" for this suite.

• Failure [423.068 seconds]
[sig-node] Variable Expansion
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should fail substituting values in a volume subpath with backticks [Slow] [Conformance] [It]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul 20 21:35:31.607: while waiting for the pod container to fail
  Unexpected error:
      <*errors.errorString | 0xc0002be280>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/expansion.go:379
------------------------------
{"msg":"FAILED [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","total":46,"completed":4,"skipped":793,"failed":4,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","[sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] GMSA support works end to end","[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability","[sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 67 lines ...
Jul 20 21:41:00.446: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"13642"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 20 21:41:00.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9741" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":46,"completed":5,"skipped":839,"failed":4,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","[sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] GMSA support works end to end","[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability","[sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController 
  Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:63
[BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
... skipping 23 lines ...
I0720 21:43:11.596950   43227 runners.go:190] rc Pods: 5 out of 5 created, 4 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0720 21:43:11.712990   43227 runners.go:190] Pod rc-5vqqn	9338k8s000	Running	<nil>
I0720 21:43:11.713065   43227 runners.go:190] Pod rc-ddgsb	9338k8s000	Running	<nil>
I0720 21:43:11.713079   43227 runners.go:190] Pod rc-ffx62	9338k8s001	Running	<nil>
I0720 21:43:11.713091   43227 runners.go:190] Pod rc-s2sws	9338k8s001	Running	<nil>
I0720 21:43:11.713101   43227 runners.go:190] Pod rc-v8749	9338k8s010	Pending	<nil>
Jul 20 21:43:11.713: FAIL: Unexpected error:
    <*errors.errorString | 0xc002bf8610>: {
        s: "only 4 pods started out of 5",
    }
    only 4 pods started out of 5
occurred

... skipping 41 lines ...
Jul 20 21:43:11.772: INFO: At 2022-07-20 21:41:05 +0000 UTC - event for rc-s2sws: {kubelet 9338k8s001} Created: Created container rc
Jul 20 21:43:11.772: INFO: At 2022-07-20 21:41:05 +0000 UTC - event for rc-s2sws: {kubelet 9338k8s001} Pulled: Container image "k8s.gcr.io/e2e-test-images/resource-consumer:1.9" already present on machine
Jul 20 21:43:11.772: INFO: At 2022-07-20 21:41:07 +0000 UTC - event for rc-5vqqn: {kubelet 9338k8s000} Started: Started container rc
Jul 20 21:43:11.772: INFO: At 2022-07-20 21:41:07 +0000 UTC - event for rc-ddgsb: {kubelet 9338k8s000} Started: Started container rc
Jul 20 21:43:11.772: INFO: At 2022-07-20 21:41:08 +0000 UTC - event for rc-ffx62: {kubelet 9338k8s001} Started: Started container rc
Jul 20 21:43:11.772: INFO: At 2022-07-20 21:41:08 +0000 UTC - event for rc-s2sws: {kubelet 9338k8s001} Started: Started container rc
Jul 20 21:43:11.772: INFO: At 2022-07-20 21:41:14 +0000 UTC - event for rc-v8749: {kubelet 9338k8s010} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "2d2ce610e3633afb12586482cae174b435aa8c884f6d07f58462cac0f3db51ec": unexpected end of JSON input
Jul 20 21:43:11.830: INFO: POD       NODE        PHASE    GRACE  CONDITIONS
Jul 20 21:43:11.830: INFO: rc-5vqqn  9338k8s000  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-07-20 21:41:01 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-07-20 21:41:08 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-07-20 21:41:08 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-07-20 21:41:01 +0000 UTC  }]
Jul 20 21:43:11.830: INFO: rc-ddgsb  9338k8s000  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-07-20 21:41:01 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-07-20 21:41:08 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-07-20 21:41:08 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-07-20 21:41:01 +0000 UTC  }]
Jul 20 21:43:11.830: INFO: rc-ffx62  9338k8s001  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-07-20 21:41:01 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-07-20 21:41:09 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-07-20 21:41:09 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-07-20 21:41:01 +0000 UTC  }]
Jul 20 21:43:11.830: INFO: rc-s2sws  9338k8s001  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-07-20 21:41:01 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-07-20 21:41:09 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-07-20 21:41:09 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-07-20 21:41:01 +0000 UTC  }]
Jul 20 21:43:11.830: INFO: rc-v8749  9338k8s010  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-07-20 21:41:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-07-20 21:41:01 +0000 UTC ContainersNotReady containers with unready status: [rc]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-07-20 21:41:01 +0000 UTC ContainersNotReady containers with unready status: [rc]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-07-20 21:41:01 +0000 UTC  }]
... skipping 72 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/autoscaling/framework.go:23
  [Serial] [Slow] ReplicationController
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:58
    Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability [It]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:63

    Jul 20 21:43:11.713: Unexpected error:
        <*errors.errorString | 0xc002bf8610>: {
            s: "only 4 pods started out of 5",
        }
        only 4 pods started out of 5
    occurred

    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/autoscaling/autoscaling_utils.go:460
------------------------------
{"msg":"FAILED [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability","total":46,"completed":5,"skipped":896,"failed":5,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","[sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] GMSA support works end to end","[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability","[sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-windows] [Feature:Windows] Density [Serial] [Slow] create a batch of pods 
  latency/resource should be within limit when create 10 pods with 0s interval
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/windows/density.go:66
[BeforeEach] [sig-windows] [Feature:Windows] Density [Serial] [Slow]
... skipping 6 lines ...
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in density-test-windows-5972
STEP: Waiting for a default service account to be provisioned in namespace
[It] latency/resource should be within limit when create 10 pods with 0s interval
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/windows/density.go:66
STEP: Creating a batch of pods
STEP: Waiting for all Pods to be observed by the watch...
Jul 20 21:53:14.210: FAIL: Timed out after 600.002s.
Expected
    <bool>: false
to be true

Full Stack Trace
k8s.io/kubernetes/test/e2e/windows.runDensityBatchTest(0xc00247fce0, 0xa, 0x0, 0x702a746, 0x5, 0x0, 0x6fc23ac00, 0xc92a69c00, 0xdbcac8e00, 0x0, ...)
... skipping 139 lines ...
    Expected
        <bool>: false
    to be true

    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/windows/density.go:116
------------------------------
{"msg":"FAILED [sig-windows] [Feature:Windows] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval","total":46,"completed":5,"skipped":1094,"failed":6,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","[sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] GMSA support works end to end","[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability","[sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability","[sig-windows] [Feature:Windows] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath 
  runs ReplicaSets to verify preemption running path [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
... skipping 34 lines ...
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 20 21:54:56.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-1744" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78
•{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":46,"completed":6,"skipped":1209,"failed":6,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","[sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] GMSA support works end to end","[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability","[sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability","[sig-windows] [Feature:Windows] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] NoExecuteTaintManager Multiple Pods [Serial] 
  evicts pods with minTolerationSeconds [Disruptive] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial]
... skipping 20 lines ...
Jul 20 21:56:34.591: INFO: Noticed Pod "taint-eviction-b2" gets evicted.
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute
[AfterEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 20 21:56:34.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "taint-multiple-pods-7325" for this suite.
•{"msg":"PASSED [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]","total":46,"completed":7,"skipped":1240,"failed":6,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","[sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] GMSA support works end to end","[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability","[sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability","[sig-windows] [Feature:Windows] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] NoExecuteTaintManager Single Pod [Serial] 
  removing taint cancels eviction [Disruptive] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial]
... skipping 21 lines ...
STEP: Waiting some time to make sure that toleration time passed.
Jul 20 21:58:51.438: INFO: Pod wasn't evicted. Test successful
[AfterEach] [sig-node] NoExecuteTaintManager Single Pod [Serial]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 20 21:58:51.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "taint-single-pod-9315" for this suite.
•{"msg":"PASSED [sig-node] NoExecuteTaintManager Single Pod [Serial] removing taint cancels eviction [Disruptive] [Conformance]","total":46,"completed":8,"skipped":1295,"failed":6,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","[sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] GMSA support works end to end","[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability","[sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability","[sig-windows] [Feature:Windows] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 39 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 20 21:58:53.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-425" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":46,"completed":9,"skipped":1327,"failed":6,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","[sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] GMSA support works end to end","[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability","[sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability","[sig-windows] [Feature:Windows] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 18 lines ...
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 20 21:59:01.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-2036" for this suite.
STEP: Destroying namespace "nsdeletetest-2344" for this suite.
Jul 20 21:59:01.867: INFO: Namespace nsdeletetest-2344 was already deleted
STEP: Destroying namespace "nsdeletetest-3158" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":46,"completed":10,"skipped":1359,"failed":6,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","[sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] GMSA support works end to end","[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability","[sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability","[sig-windows] [Feature:Windows] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Pods 
  should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:679
[BeforeEach] [sig-node] Pods
... skipping 34 lines ...
• [SLOW TEST:430.613 seconds]
[sig-node] Pods
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:679
------------------------------
{"msg":"PASSED [sig-node] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]","total":46,"completed":11,"skipped":1424,"failed":6,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","[sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] GMSA support works end to end","[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability","[sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability","[sig-windows] [Feature:Windows] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 912 lines ...
Jul 20 22:11:13.675: INFO: DaemonSet pods can't tolerate node k8s-master-93387234-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jul 20 22:11:13.734: INFO: Number of nodes with available pods: 2
Jul 20 22:11:13.734: INFO: Node 9338k8s010 is running more than one daemon pod
Jul 20 22:11:13.792: INFO: DaemonSet pods can't tolerate node k8s-master-93387234-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jul 20 22:11:13.851: INFO: Number of nodes with available pods: 2
Jul 20 22:11:13.851: INFO: Node 9338k8s010 is running more than one daemon pod
Jul 20 22:11:13.852: FAIL: error waiting for daemon pod to start
Unexpected error:
    <*errors.errorString | 0xc0002be280>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 11 lines ...
[AfterEach] [sig-apps] Daemon set [Serial]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:108
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6064, will wait for the garbage collector to delete the pods
Jul 20 22:11:14.131: INFO: Deleting DaemonSet.extensions daemon-set took: 62.406998ms
Jul 20 22:11:14.232: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.259649ms
Jul 20 22:31:14.232: INFO: ERROR: Pod "daemon-set-9rdp9" still exists. Node: "9338k8s010"
Jul 20 22:31:14.233: FAIL: Unexpected error:
    <*errors.errorString | 0xc001d24310>: {
        s: "error while waiting for pods gone daemon-set: there are 1 pods left. E.g. \"daemon-set-9rdp9\" on node \"9338k8s010\"",
    }
    error while waiting for pods gone daemon-set: there are 1 pods left. E.g. "daemon-set-9rdp9" on node "9338k8s010"
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/apps.glob..func3.1()
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:115 +0x407
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000703980)
... skipping 17 lines ...
Jul 20 22:31:14.295: INFO: At 2022-07-20 22:06:16 +0000 UTC - event for daemon-set-kg8wh: {kubelet 9338k8s000} Pulled: Container image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" already present on machine
Jul 20 22:31:14.295: INFO: At 2022-07-20 22:06:16 +0000 UTC - event for daemon-set-kg8wh: {kubelet 9338k8s000} Created: Created container app
Jul 20 22:31:14.295: INFO: At 2022-07-20 22:06:17 +0000 UTC - event for daemon-set-kg8wh: {kubelet 9338k8s000} Started: Started container app
Jul 20 22:31:14.295: INFO: At 2022-07-20 22:06:17 +0000 UTC - event for daemon-set-lcvpb: {kubelet 9338k8s001} Pulled: Container image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" already present on machine
Jul 20 22:31:14.295: INFO: At 2022-07-20 22:06:18 +0000 UTC - event for daemon-set-lcvpb: {kubelet 9338k8s001} Created: Created container app
Jul 20 22:31:14.295: INFO: At 2022-07-20 22:06:22 +0000 UTC - event for daemon-set-lcvpb: {kubelet 9338k8s001} Started: Started container app
Jul 20 22:31:14.295: INFO: At 2022-07-20 22:06:26 +0000 UTC - event for daemon-set-9rdp9: {kubelet 9338k8s010} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "bc0ce5905c5bff9066c230374daff8b8e3b07c0036c90fbe225b3315a7833a74": unexpected end of JSON input
Jul 20 22:31:14.295: INFO: At 2022-07-20 22:11:14 +0000 UTC - event for daemon-set-kg8wh: {kubelet 9338k8s000} Killing: Stopping container app
Jul 20 22:31:14.295: INFO: At 2022-07-20 22:11:14 +0000 UTC - event for daemon-set-lcvpb: {kubelet 9338k8s001} Killing: Stopping container app
Jul 20 22:31:14.354: INFO: POD               NODE        PHASE    GRACE  CONDITIONS
Jul 20 22:31:14.354: INFO: daemon-set-9rdp9  9338k8s010  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-07-20 22:06:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-07-20 22:06:13 +0000 UTC ContainersNotReady containers with unready status: [app]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-07-20 22:06:13 +0000 UTC ContainersNotReady containers with unready status: [app]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-07-20 22:06:13 +0000 UTC  }]
Jul 20 22:31:14.354: INFO: 
Jul 20 22:31:14.411: INFO: 
... skipping 58 lines ...
• Failure [1503.725 seconds]
[sig-apps] Daemon set [Serial]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance] [It]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul 20 22:11:13.852: error waiting for daemon pod to start
  Unexpected error:
      <*errors.errorString | 0xc0002be280>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:171
------------------------------
{"msg":"FAILED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":46,"completed":11,"skipped":1473,"failed":7,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","[sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] GMSA support works end to end","[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability","[sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability","[sig-windows] [Feature:Windows] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval","[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should verify changes to a daemon set status [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 912 lines ...
Jul 20 22:36:17.445: INFO: DaemonSet pods can't tolerate node k8s-master-93387234-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jul 20 22:36:17.503: INFO: Number of nodes with available pods: 2
Jul 20 22:36:17.504: INFO: Node 9338k8s010 is running more than one daemon pod
Jul 20 22:36:17.561: INFO: DaemonSet pods can't tolerate node k8s-master-93387234-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jul 20 22:36:17.619: INFO: Number of nodes with available pods: 2
Jul 20 22:36:17.620: INFO: Node 9338k8s010 is running more than one daemon pod
Jul 20 22:36:17.620: FAIL: error waiting for daemon pod to start
Unexpected error:
    <*errors.errorString | 0xc0002be280>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 11 lines ...
[AfterEach] [sig-apps] Daemon set [Serial]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:108
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4607, will wait for the garbage collector to delete the pods
Jul 20 22:36:17.896: INFO: Deleting DaemonSet.extensions daemon-set took: 61.3432ms
Jul 20 22:36:17.997: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.508832ms
Jul 20 22:56:17.998: INFO: ERROR: Pod "daemon-set-bmprb" still exists. Node: "9338k8s010"
Jul 20 22:56:17.998: FAIL: Unexpected error:
    <*errors.errorString | 0xc002cd0280>: {
        s: "error while waiting for pods gone daemon-set: there are 1 pods left. E.g. \"daemon-set-bmprb\" on node \"9338k8s010\"",
    }
    error while waiting for pods gone daemon-set: there are 1 pods left. E.g. "daemon-set-bmprb" on node "9338k8s010"
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/apps.glob..func3.1()
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:115 +0x407
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000703980)
... skipping 17 lines ...
Jul 20 22:56:18.058: INFO: At 2022-07-20 22:31:19 +0000 UTC - event for daemon-set-fsxr2: {kubelet 9338k8s000} Pulled: Container image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" already present on machine
Jul 20 22:56:18.058: INFO: At 2022-07-20 22:31:19 +0000 UTC - event for daemon-set-fsxr2: {kubelet 9338k8s000} Created: Created container app
Jul 20 22:56:18.058: INFO: At 2022-07-20 22:31:19 +0000 UTC - event for daemon-set-m8nmq: {kubelet 9338k8s001} Created: Created container app
Jul 20 22:56:18.058: INFO: At 2022-07-20 22:31:19 +0000 UTC - event for daemon-set-m8nmq: {kubelet 9338k8s001} Pulled: Container image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" already present on machine
Jul 20 22:56:18.058: INFO: At 2022-07-20 22:31:21 +0000 UTC - event for daemon-set-fsxr2: {kubelet 9338k8s000} Started: Started container app
Jul 20 22:56:18.058: INFO: At 2022-07-20 22:31:21 +0000 UTC - event for daemon-set-m8nmq: {kubelet 9338k8s001} Started: Started container app
Jul 20 22:56:18.058: INFO: At 2022-07-20 22:31:30 +0000 UTC - event for daemon-set-bmprb: {kubelet 9338k8s010} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "c8b3d9d687e28b316d50f4ebfbfee6f334280911c43595f65385c6475fab1741": unexpected end of JSON input
Jul 20 22:56:18.058: INFO: At 2022-07-20 22:36:17 +0000 UTC - event for daemon-set-fsxr2: {kubelet 9338k8s000} Killing: Stopping container app
Jul 20 22:56:18.058: INFO: At 2022-07-20 22:36:17 +0000 UTC - event for daemon-set-m8nmq: {kubelet 9338k8s001} Killing: Stopping container app
Jul 20 22:56:18.115: INFO: POD               NODE        PHASE    GRACE  CONDITIONS
Jul 20 22:56:18.115: INFO: daemon-set-bmprb  9338k8s010  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-07-20 22:31:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-07-20 22:31:17 +0000 UTC ContainersNotReady containers with unready status: [app]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-07-20 22:31:17 +0000 UTC ContainersNotReady containers with unready status: [app]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-07-20 22:31:17 +0000 UTC  }]
Jul 20 22:56:18.115: INFO: 
Jul 20 22:56:18.173: INFO: 
... skipping 58 lines ...
• Failure [1503.695 seconds]
[sig-apps] Daemon set [Serial]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should verify changes to a daemon set status [Conformance] [It]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul 20 22:36:17.620: error waiting for daemon pod to start
  Unexpected error:
      <*errors.errorString | 0xc0002be280>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:881
------------------------------
{"msg":"FAILED [sig-apps] Daemon set [Serial] should verify changes to a daemon set status [Conformance]","total":46,"completed":11,"skipped":1663,"failed":8,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","[sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] GMSA support works end to end","[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability","[sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability","[sig-windows] [Feature:Windows] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval","[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","[sig-apps] Daemon set [Serial] should verify changes to a daemon set status [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-windows] [Feature:Windows] GMSA Kubelet [Slow] kubelet GMSA support when creating a pod with correct GMSA credential specs 
  passes the credential specs down to the Pod's containers
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/windows/gmsa_kubelet.go:43
[BeforeEach] [sig-windows] [Feature:Windows] GMSA Kubelet [Slow]
... skipping 23 lines ...
Jul 20 22:56:35.892: INFO: stderr: ""
Jul 20 22:56:35.892: INFO: stdout: "contoso.org. (1)\r\nThe command completed successfully\r\n"
[AfterEach] [sig-windows] [Feature:Windows] GMSA Kubelet [Slow]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 20 22:56:35.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gmsa-kubelet-test-windows-1218" for this suite.
•{"msg":"PASSED [sig-windows] [Feature:Windows] GMSA Kubelet [Slow] kubelet GMSA support when creating a pod with correct GMSA credential specs passes the credential specs down to the Pod's containers","total":46,"completed":12,"skipped":1827,"failed":8,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","[sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] GMSA support works end to end","[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability","[sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability","[sig-windows] [Feature:Windows] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval","[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","[sig-apps] Daemon set [Serial] should verify changes to a daemon set status [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints 
  verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
... skipping 29 lines ...
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 20 22:57:38.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-1585" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78
•{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]","total":46,"completed":13,"skipped":1949,"failed":8,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","[sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] GMSA support works end to end","[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability","[sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability","[sig-windows] [Feature:Windows] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval","[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","[sig-apps] Daemon set [Serial] should verify changes to a daemon set status [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 35 lines ...
• [SLOW TEST:314.148 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":46,"completed":14,"skipped":2361,"failed":8,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","[sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] GMSA support works end to end","[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability","[sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability","[sig-windows] [Feature:Windows] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval","[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","[sig-apps] Daemon set [Serial] should verify changes to a daemon set status [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-windows] [Feature:Windows] Cpu Resources [Serial] Container limits 
  should not be exceeded after waiting 2 minutes
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/windows/cpu_limits.go:41
[BeforeEach] [sig-windows] [Feature:Windows] Cpu Resources [Serial]
... skipping 162 lines ...
Jul 20 23:07:52.166: INFO: The status of Pod cpulimittest-0a23a9aa-d857-4d9c-919f-28cb67da1e72 is Pending, waiting for it to be Running (with Ready = true)
Jul 20 23:07:54.166: INFO: The status of Pod cpulimittest-0a23a9aa-d857-4d9c-919f-28cb67da1e72 is Pending, waiting for it to be Running (with Ready = true)
Jul 20 23:07:56.166: INFO: The status of Pod cpulimittest-0a23a9aa-d857-4d9c-919f-28cb67da1e72 is Pending, waiting for it to be Running (with Ready = true)
Jul 20 23:07:58.165: INFO: The status of Pod cpulimittest-0a23a9aa-d857-4d9c-919f-28cb67da1e72 is Pending, waiting for it to be Running (with Ready = true)
Jul 20 23:08:00.167: INFO: The status of Pod cpulimittest-0a23a9aa-d857-4d9c-919f-28cb67da1e72 is Pending, waiting for it to be Running (with Ready = true)
Jul 20 23:08:00.226: INFO: The status of Pod cpulimittest-0a23a9aa-d857-4d9c-919f-28cb67da1e72 is Pending, waiting for it to be Running (with Ready = true)
Jul 20 23:08:00.226: FAIL: Unexpected error:
    <*errors.errorString | 0xc0002be280>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 3 lines ...
k8s.io/kubernetes/test/e2e/framework.(*PodClient).CreateBatch.func1(0xc003b65260, 0xc00508bf50, 0xc002a74140, 0x1, 0x1, 0x0, 0xc00154ac00)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:119 +0x77
created by k8s.io/kubernetes/test/e2e/framework.(*PodClient).CreateBatch
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:116 +0xf4
STEP: Waiting 2 minutes
STEP: Ensuring pods are still running
Jul 20 23:10:00.349: FAIL: Expected
    <v1.PodPhase>: Pending
to equal
    <v1.PodPhase>: Running

Full Stack Trace
k8s.io/kubernetes/test/e2e/windows.glob..func1.1.1()
... skipping 12 lines ...
STEP: Found 6 events.
Jul 20 23:10:00.420: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for cpulimittest-0a23a9aa-d857-4d9c-919f-28cb67da1e72: { } Scheduled: Successfully assigned cpu-resources-test-windows-2051/cpulimittest-0a23a9aa-d857-4d9c-919f-28cb67da1e72 to 9338k8s010
Jul 20 23:10:00.420: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for cpulimittest-d858be6f-076c-4a16-8457-b75fc4e5f345: { } Scheduled: Successfully assigned cpu-resources-test-windows-2051/cpulimittest-d858be6f-076c-4a16-8457-b75fc4e5f345 to 9338k8s001
Jul 20 23:10:00.420: INFO: At 2022-07-20 23:02:56 +0000 UTC - event for cpulimittest-d858be6f-076c-4a16-8457-b75fc4e5f345: {kubelet 9338k8s001} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-1" already present on machine
Jul 20 23:10:00.420: INFO: At 2022-07-20 23:02:56 +0000 UTC - event for cpulimittest-d858be6f-076c-4a16-8457-b75fc4e5f345: {kubelet 9338k8s001} Created: Created container cpulimittest-d858be6f-076c-4a16-8457-b75fc4e5f345
Jul 20 23:10:00.420: INFO: At 2022-07-20 23:02:58 +0000 UTC - event for cpulimittest-d858be6f-076c-4a16-8457-b75fc4e5f345: {kubelet 9338k8s001} Started: Started container cpulimittest-d858be6f-076c-4a16-8457-b75fc4e5f345
Jul 20 23:10:00.420: INFO: At 2022-07-20 23:03:12 +0000 UTC - event for cpulimittest-0a23a9aa-d857-4d9c-919f-28cb67da1e72: {kubelet 9338k8s010} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "40742e4199f0bbbf291e8841d0b58b58b6131a3522eb19a33381b3f6f936121b": unexpected end of JSON input
Jul 20 23:10:00.480: INFO: POD                                                NODE        PHASE    GRACE  CONDITIONS
Jul 20 23:10:00.480: INFO: cpulimittest-0a23a9aa-d857-4d9c-919f-28cb67da1e72  9338k8s010  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-07-20 23:03:00 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-07-20 23:03:00 +0000 UTC ContainersNotReady containers with unready status: [cpulimittest-0a23a9aa-d857-4d9c-919f-28cb67da1e72]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-07-20 23:03:00 +0000 UTC ContainersNotReady containers with unready status: [cpulimittest-0a23a9aa-d857-4d9c-919f-28cb67da1e72]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-07-20 23:03:00 +0000 UTC  }]
Jul 20 23:10:00.480: INFO: cpulimittest-d858be6f-076c-4a16-8457-b75fc4e5f345  9338k8s001  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-07-20 23:02:53 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-07-20 23:02:59 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-07-20 23:02:59 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-07-20 23:02:53 +0000 UTC  }]
Jul 20 23:10:00.480: INFO: 
Jul 20 23:10:00.595: INFO: 
Logging node info for node 9338k8s000
... skipping 61 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/windows/framework.go:27
  Container limits
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/windows/cpu_limits.go:40
    should not be exceeded after waiting 2 minutes [It]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/windows/cpu_limits.go:41

    Jul 20 23:08:00.226: Unexpected error:
        <*errors.errorString | 0xc0002be280>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred

    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:103
------------------------------
{"msg":"FAILED [sig-windows] [Feature:Windows] Cpu Resources [Serial] Container limits should not be exceeded after waiting 2 minutes","total":46,"completed":14,"skipped":2381,"failed":9,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","[sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] GMSA support works end to end","[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability","[sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability","[sig-windows] [Feature:Windows] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval","[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","[sig-apps] Daemon set [Serial] should verify changes to a daemon set status [Conformance]","[sig-windows] [Feature:Windows] Cpu Resources [Serial] Container limits should not be exceeded after waiting 2 minutes"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should list and delete a collection of DaemonSets [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 912 lines ...
Jul 20 23:15:03.862: INFO: DaemonSet pods can't tolerate node k8s-master-93387234-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jul 20 23:15:03.921: INFO: Number of nodes with available pods: 2
Jul 20 23:15:03.921: INFO: Node 9338k8s010 is running more than one daemon pod
Jul 20 23:15:03.978: INFO: DaemonSet pods can't tolerate node k8s-master-93387234-0 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jul 20 23:15:04.037: INFO: Number of nodes with available pods: 2
Jul 20 23:15:04.037: INFO: Node 9338k8s010 is running more than one daemon pod
Jul 20 23:15:04.037: FAIL: error waiting for daemon pod to start
Unexpected error:
    <*errors.errorString | 0xc0002be280>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 11 lines ...
[AfterEach] [sig-apps] Daemon set [Serial]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:108
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2963, will wait for the garbage collector to delete the pods
Jul 20 23:15:04.319: INFO: Deleting DaemonSet.extensions daemon-set took: 63.487946ms
Jul 20 23:15:04.420: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.945618ms
Jul 20 23:35:04.421: INFO: ERROR: Pod "daemon-set-gqfzt" still exists. Node: "9338k8s010"
Jul 20 23:35:04.421: FAIL: Unexpected error:
    <*errors.errorString | 0xc0006f69b0>: {
        s: "error while waiting for pods gone daemon-set: there are 1 pods left. E.g. \"daemon-set-gqfzt\" on node \"9338k8s010\"",
    }
    error while waiting for pods gone daemon-set: there are 1 pods left. E.g. "daemon-set-gqfzt" on node "9338k8s010"
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/apps.glob..func3.1()
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:115 +0x407
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000703980)
... skipping 17 lines ...
Jul 20 23:35:04.483: INFO: At 2022-07-20 23:10:05 +0000 UTC - event for daemon-set-gg2db: {kubelet 9338k8s000} Pulled: Container image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" already present on machine
Jul 20 23:35:04.483: INFO: At 2022-07-20 23:10:06 +0000 UTC - event for daemon-set-gg2db: {kubelet 9338k8s000} Created: Created container app
Jul 20 23:35:04.483: INFO: At 2022-07-20 23:10:07 +0000 UTC - event for daemon-set-lx9l5: {kubelet 9338k8s001} Created: Created container app
Jul 20 23:35:04.483: INFO: At 2022-07-20 23:10:07 +0000 UTC - event for daemon-set-lx9l5: {kubelet 9338k8s001} Pulled: Container image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" already present on machine
Jul 20 23:35:04.483: INFO: At 2022-07-20 23:10:08 +0000 UTC - event for daemon-set-gg2db: {kubelet 9338k8s000} Started: Started container app
Jul 20 23:35:04.483: INFO: At 2022-07-20 23:10:09 +0000 UTC - event for daemon-set-lx9l5: {kubelet 9338k8s001} Started: Started container app
Jul 20 23:35:04.483: INFO: At 2022-07-20 23:10:16 +0000 UTC - event for daemon-set-gqfzt: {kubelet 9338k8s010} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "0e78efaf57fdf29120ffb4fe12ac54170c37d39541115db97b619f3ed9cd5159": unexpected end of JSON input
Jul 20 23:35:04.483: INFO: At 2022-07-20 23:15:04 +0000 UTC - event for daemon-set-gg2db: {kubelet 9338k8s000} Killing: Stopping container app
Jul 20 23:35:04.483: INFO: At 2022-07-20 23:15:04 +0000 UTC - event for daemon-set-lx9l5: {kubelet 9338k8s001} Killing: Stopping container app
Jul 20 23:35:04.541: INFO: POD               NODE        PHASE    GRACE  CONDITIONS
Jul 20 23:35:04.541: INFO: daemon-set-gqfzt  9338k8s010  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-07-20 23:10:03 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-07-20 23:10:03 +0000 UTC ContainersNotReady containers with unready status: [app]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-07-20 23:10:03 +0000 UTC ContainersNotReady containers with unready status: [app]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-07-20 23:10:03 +0000 UTC  }]
Jul 20 23:35:04.541: INFO: 
Jul 20 23:35:04.655: INFO: 
... skipping 58 lines ...
• Failure [1503.845 seconds]
[sig-apps] Daemon set [Serial]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should list and delete a collection of DaemonSets [Conformance] [It]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul 20 23:15:04.037: error waiting for daemon pod to start
  Unexpected error:
      <*errors.errorString | 0xc0002be280>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:833
------------------------------
{"msg":"FAILED [sig-apps] Daemon set [Serial] should list and delete a collection of DaemonSets [Conformance]","total":46,"completed":14,"skipped":2431,"failed":10,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","[sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] GMSA support works end to end","[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability","[sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability","[sig-windows] [Feature:Windows] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval","[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","[sig-apps] Daemon set [Serial] should verify changes to a daemon set status [Conformance]","[sig-windows] [Feature:Windows] Cpu Resources [Serial] Container limits should not be exceeded after waiting 2 minutes","[sig-apps] Daemon set [Serial] should list and delete a collection of DaemonSets [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) ReplicationController light 
  Should scale from 2 pods to 1 pod [Slow]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:81
[BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
... skipping 123 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/autoscaling/framework.go:23
  ReplicationController light
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:68
    Should scale from 2 pods to 1 pod [Slow]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:81
------------------------------
{"msg":"PASSED [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) ReplicationController light Should scale from 2 pods to 1 pod [Slow]","total":46,"completed":15,"skipped":2574,"failed":10,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","[sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] GMSA support works end to end","[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability","[sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability","[sig-windows] [Feature:Windows] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval","[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","[sig-apps] Daemon set [Serial] should verify changes to a daemon set status [Conformance]","[sig-windows] [Feature:Windows] Cpu Resources [Serial] Container limits should not be exceeded after waiting 2 minutes","[sig-apps] Daemon set [Serial] should list and delete a collection of DaemonSets [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Pods 
  should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:720
[BeforeEach] [sig-node] Pods
... skipping 25 lines ...
• [SLOW TEST:1645.804 seconds]
[sig-node] Pods
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:720
------------------------------
{"msg":"PASSED [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]","total":46,"completed":16,"skipped":2664,"failed":10,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","[sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] GMSA support works end to end","[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability","[sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability","[sig-windows] [Feature:Windows] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval","[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","[sig-apps] Daemon set [Serial] should verify changes to a daemon set status [Conformance]","[sig-windows] [Feature:Windows] Cpu Resources [Serial] Container limits should not be exceeded after waiting 2 minutes","[sig-apps] Daemon set [Serial] should list and delete a collection of DaemonSets [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
... skipping 21 lines ...
Jul 21 00:09:46.589: INFO: Deleting ReplicationController wrapped-volume-race-816edbfe-15a0-41ad-9e16-b24cd64357bc took: 64.675144ms
Jul 21 00:09:46.790: INFO: Terminating ReplicationController wrapped-volume-race-816edbfe-15a0-41ad-9e16-b24cd64357bc pods took: 200.746874ms
STEP: Creating RC which spawns configmap-volume pods
Jul 21 00:09:52.838: INFO: Pod name wrapped-volume-race-16e23b02-c36c-43ce-bdf2-dc7b07e73b2b: Found 1 pods out of 5
Jul 21 00:09:58.016: INFO: Pod name wrapped-volume-race-16e23b02-c36c-43ce-bdf2-dc7b07e73b2b: Found 5 pods out of 5
STEP: Ensuring each pod is running
Jul 21 00:14:58.246: FAIL: Failed waiting for pod wrapped-volume-race-16e23b02-c36c-43ce-bdf2-dc7b07e73b2b-g49bv to enter running state
Unexpected error:
    <*errors.errorString | 0xc0002be280>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 10 lines ...
	/usr/local/go/src/testing/testing.go:1203 +0xe5
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1248 +0x2b3
STEP: deleting ReplicationController wrapped-volume-race-16e23b02-c36c-43ce-bdf2-dc7b07e73b2b in namespace emptydir-wrapper-2364, will wait for the garbage collector to delete the pods
Jul 21 00:14:58.515: INFO: Deleting ReplicationController wrapped-volume-race-16e23b02-c36c-43ce-bdf2-dc7b07e73b2b took: 60.113316ms
Jul 21 00:14:58.616: INFO: Terminating ReplicationController wrapped-volume-race-16e23b02-c36c-43ce-bdf2-dc7b07e73b2b pods took: 100.981915ms
{"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:165","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 5h0m0s timeout","severity":"error","time":"2022-07-21T00:22:08Z"}
++ early_exit_handler
++ '[' -n 179 ']'
++ kill -TERM 179
++ cleanup_dind
++ [[ true == \t\r\u\e ]]
++ echo 'Cleaning up after docker'
... skipping 9 lines ...
================================================================================
Cleaning up after docker
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Stopping Docker: dockerProgram process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
================================================================================
Done cleaning up after docker in docker.
Jul 21 00:34:58.616: INFO: ERROR: Pod "wrapped-volume-race-16e23b02-c36c-43ce-bdf2-dc7b07e73b2b-n4f9l" still exists. Node: "9338k8s010"
Jul 21 00:34:58.616: INFO: ERROR: Pod "wrapped-volume-race-16e23b02-c36c-43ce-bdf2-dc7b07e73b2b-t2fwl" still exists. Node: "9338k8s010"
Jul 21 00:34:58.616: INFO: ERROR: Pod "wrapped-volume-race-16e23b02-c36c-43ce-bdf2-dc7b07e73b2b-pcf2k" still exists. Node: "9338k8s010"
Jul 21 00:34:58.616: INFO: ERROR: Pod "wrapped-volume-race-16e23b02-c36c-43ce-bdf2-dc7b07e73b2b-ztmz5" still exists. Node: "9338k8s010"
Jul 21 00:34:58.616: INFO: ERROR: Pod "wrapped-volume-race-16e23b02-c36c-43ce-bdf2-dc7b07e73b2b-g49bv" still exists. Node: "9338k8s010"
Jul 21 00:34:58.617: FAIL: Unexpected error:
    <*errors.errorString | 0xc001c44330>: {
        s: "error while waiting for pods gone wrapped-volume-race-16e23b02-c36c-43ce-bdf2-dc7b07e73b2b: there are 5 pods left. E.g. \"wrapped-volume-race-16e23b02-c36c-43ce-bdf2-dc7b07e73b2b-n4f9l\" on node \"9338k8s010\"",
    }
    error while waiting for pods gone wrapped-volume-race-16e23b02-c36c-43ce-bdf2-dc7b07e73b2b: there are 5 pods left. E.g. "wrapped-volume-race-16e23b02-c36c-43ce-bdf2-dc7b07e73b2b-n4f9l" on node "9338k8s010"
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/storage.testNoWrappedVolumeRace.func1(0xc0014d0160, 0xc0038f6080, 0x38)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/empty_dir_wrapper.go:390 +0xb8
panic(0x6bbe4c0, 0xc002b06cc0)
	/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc000b868c0, 0x128, 0x893857b, 0x70, 0x195, 0xc002c6ce00, 0x346)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5
panic(0x62ef260, 0x7795600)
	/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc000b868c0, 0x128, 0xc0007d8940, 0x1, 0x1)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5
k8s.io/kubernetes/test/e2e/framework.Fail(0xc002ea6240, 0x113, 0xc003b65390, 0x1, 0x1)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:62 +0x1ea
k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).match(0xc0007d8ac8, 0x78caad8, 0xa077fc8, 0x0, 0xc0007d8c68, 0x2, 0x2, 0x78de3c8)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:79 +0x216
k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).NotTo(0xc0007d8ac8, 0x78caad8, 0xa077fc8, 0xc0007d8c68, 0x2, 0x2, 0xc00008d400)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:48 +0xc7
k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, 0x78610e0, 0xc0002be280, 0xc0007d8c68, 0x2, 0x2)
... skipping 34 lines ...
Jul 21 00:35:01.736: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for wrapped-volume-race-c1f461b6-d1e3-4bbe-a040-e56bada99cba-zcjnn: { } Scheduled: Successfully assigned emptydir-wrapper-2364/wrapped-volume-race-c1f461b6-d1e3-4bbe-a040-e56bada99cba-zcjnn to 9338k8s001
Jul 21 00:35:01.736: INFO: At 2022-07-21 00:08:37 +0000 UTC - event for wrapped-volume-race-c1f461b6-d1e3-4bbe-a040-e56bada99cba: {replication-controller } SuccessfulCreate: Created pod: wrapped-volume-race-c1f461b6-d1e3-4bbe-a040-e56bada99cba-zcjnn
Jul 21 00:35:01.736: INFO: At 2022-07-21 00:08:37 +0000 UTC - event for wrapped-volume-race-c1f461b6-d1e3-4bbe-a040-e56bada99cba: {replication-controller } SuccessfulCreate: Created pod: wrapped-volume-race-c1f461b6-d1e3-4bbe-a040-e56bada99cba-wt5j2
Jul 21 00:35:01.736: INFO: At 2022-07-21 00:08:37 +0000 UTC - event for wrapped-volume-race-c1f461b6-d1e3-4bbe-a040-e56bada99cba: {replication-controller } SuccessfulCreate: Created pod: wrapped-volume-race-c1f461b6-d1e3-4bbe-a040-e56bada99cba-ldbc2
Jul 21 00:35:01.736: INFO: At 2022-07-21 00:08:37 +0000 UTC - event for wrapped-volume-race-c1f461b6-d1e3-4bbe-a040-e56bada99cba: {replication-controller } SuccessfulCreate: Created pod: wrapped-volume-race-c1f461b6-d1e3-4bbe-a040-e56bada99cba-8cntr
Jul 21 00:35:01.736: INFO: At 2022-07-21 00:08:37 +0000 UTC - event for wrapped-volume-race-c1f461b6-d1e3-4bbe-a040-e56bada99cba: {replication-controller } SuccessfulCreate: Created pod: wrapped-volume-race-c1f461b6-d1e3-4bbe-a040-e56bada99cba-nrn7w
Jul 21 00:35:01.736: INFO: At 2022-07-21 00:08:38 +0000 UTC - event for wrapped-volume-race-c1f461b6-d1e3-4bbe-a040-e56bada99cba-8cntr: {kubelet 9338k8s001} FailedMount: MountVolume.SetUp failed for volume "racey-configmap-15" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.736: INFO: At 2022-07-21 00:08:38 +0000 UTC - event for wrapped-volume-race-c1f461b6-d1e3-4bbe-a040-e56bada99cba-8cntr: {kubelet 9338k8s001} FailedMount: MountVolume.SetUp failed for volume "racey-configmap-43" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.736: INFO: At 2022-07-21 00:08:38 +0000 UTC - event for wrapped-volume-race-c1f461b6-d1e3-4bbe-a040-e56bada99cba-8cntr: {kubelet 9338k8s001} FailedMount: MountVolume.SetUp failed for volume "racey-configmap-30" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.736: INFO: At 2022-07-21 00:08:38 +0000 UTC - event for wrapped-volume-race-c1f461b6-d1e3-4bbe-a040-e56bada99cba-8cntr: {kubelet 9338k8s001} FailedMount: MountVolume.SetUp failed for volume "racey-configmap-31" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.736: INFO: At 2022-07-21 00:08:38 +0000 UTC - event for wrapped-volume-race-c1f461b6-d1e3-4bbe-a040-e56bada99cba-ldbc2: {kubelet 9338k8s001} FailedMount: MountVolume.SetUp failed for volume "racey-configmap-0" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.736: INFO: At 2022-07-21 00:08:38 +0000 UTC - event for wrapped-volume-race-c1f461b6-d1e3-4bbe-a040-e56bada99cba-ldbc2: {kubelet 9338k8s001} FailedMount: MountVolume.SetUp failed for volume "racey-configmap-12" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.736: INFO: At 2022-07-21 00:08:38 +0000 UTC - event for wrapped-volume-race-c1f461b6-d1e3-4bbe-a040-e56bada99cba-ldbc2: {kubelet 9338k8s001} FailedMount: MountVolume.SetUp failed for volume "racey-configmap-27" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.736: INFO: At 2022-07-21 00:08:38 +0000 UTC - event for wrapped-volume-race-c1f461b6-d1e3-4bbe-a040-e56bada99cba-ldbc2: {kubelet 9338k8s001} FailedMount: MountVolume.SetUp failed for volume "racey-configmap-39" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.736: INFO: At 2022-07-21 00:08:38 +0000 UTC - event for wrapped-volume-race-c1f461b6-d1e3-4bbe-a040-e56bada99cba-nrn7w: {kubelet 9338k8s001} FailedMount: MountVolume.SetUp failed for volume "racey-configmap-6" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.736: INFO: At 2022-07-21 00:08:38 +0000 UTC - event for wrapped-volume-race-c1f461b6-d1e3-4bbe-a040-e56bada99cba-nrn7w: {kubelet 9338k8s001} FailedMount: MountVolume.SetUp failed for volume "racey-configmap-19" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.736: INFO: At 2022-07-21 00:08:38 +0000 UTC - event for wrapped-volume-race-c1f461b6-d1e3-4bbe-a040-e56bada99cba-nrn7w: {kubelet 9338k8s001} FailedMount: MountVolume.SetUp failed for volume "racey-configmap-31" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.736: INFO: At 2022-07-21 00:08:38 +0000 UTC - event for wrapped-volume-race-c1f461b6-d1e3-4bbe-a040-e56bada99cba-wt5j2: {kubelet 9338k8s001} FailedMount: MountVolume.SetUp failed for volume "racey-configmap-45" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.736: INFO: At 2022-07-21 00:08:38 +0000 UTC - event for wrapped-volume-race-c1f461b6-d1e3-4bbe-a040-e56bada99cba-wt5j2: {kubelet 9338k8s001} FailedMount: MountVolume.SetUp failed for volume "racey-configmap-37" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.736: INFO: At 2022-07-21 00:08:38 +0000 UTC - event for wrapped-volume-race-c1f461b6-d1e3-4bbe-a040-e56bada99cba-wt5j2: {kubelet 9338k8s001} FailedMount: MountVolume.SetUp failed for volume "racey-configmap-27" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.736: INFO: At 2022-07-21 00:08:38 +0000 UTC - event for wrapped-volume-race-c1f461b6-d1e3-4bbe-a040-e56bada99cba-zcjnn: {kubelet 9338k8s001} FailedMount: MountVolume.SetUp failed for volume "racey-configmap-1" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.736: INFO: At 2022-07-21 00:08:38 +0000 UTC - event for wrapped-volume-race-c1f461b6-d1e3-4bbe-a040-e56bada99cba-zcjnn: {kubelet 9338k8s001} FailedMount: MountVolume.SetUp failed for volume "racey-configmap-43" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.736: INFO: At 2022-07-21 00:08:38 +0000 UTC - event for wrapped-volume-race-c1f461b6-d1e3-4bbe-a040-e56bada99cba-zcjnn: {kubelet 9338k8s001} FailedMount: (combined from similar events): MountVolume.SetUp failed for volume "racey-configmap-4" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.736: INFO: At 2022-07-21 00:08:44 +0000 UTC - event for wrapped-volume-race-c1f461b6-d1e3-4bbe-a040-e56bada99cba-ldbc2: {kubelet 9338k8s001} FailedMount: (combined from similar events): MountVolume.SetUp failed for volume "racey-configmap-30" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.736: INFO: At 2022-07-21 00:08:45 +0000 UTC - event for wrapped-volume-race-c1f461b6-d1e3-4bbe-a040-e56bada99cba-wt5j2: {kubelet 9338k8s001} FailedMount: (combined from similar events): MountVolume.SetUp failed for volume "racey-configmap-44" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.736: INFO: At 2022-07-21 00:08:55 +0000 UTC - event for wrapped-volume-race-c1f461b6-d1e3-4bbe-a040-e56bada99cba-8cntr: {kubelet 9338k8s001} FailedMount: (combined from similar events): MountVolume.SetUp failed for volume "racey-configmap-34" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.736: INFO: At 2022-07-21 00:08:55 +0000 UTC - event for wrapped-volume-race-c1f461b6-d1e3-4bbe-a040-e56bada99cba-nrn7w: {kubelet 9338k8s001} FailedMount: (combined from similar events): MountVolume.SetUp failed for volume "racey-configmap-42" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.736: INFO: At 2022-07-21 00:09:16 +0000 UTC - event for wrapped-volume-race-816edbfe-15a0-41ad-9e16-b24cd64357bc: {replication-controller } SuccessfulCreate: Created pod: wrapped-volume-race-816edbfe-15a0-41ad-9e16-b24cd64357bc-vbcd7
Jul 21 00:35:01.736: INFO: At 2022-07-21 00:09:16 +0000 UTC - event for wrapped-volume-race-816edbfe-15a0-41ad-9e16-b24cd64357bc: {replication-controller } SuccessfulCreate: Created pod: wrapped-volume-race-816edbfe-15a0-41ad-9e16-b24cd64357bc-mzzbt
Jul 21 00:35:01.736: INFO: At 2022-07-21 00:09:16 +0000 UTC - event for wrapped-volume-race-816edbfe-15a0-41ad-9e16-b24cd64357bc: {replication-controller } SuccessfulCreate: Created pod: wrapped-volume-race-816edbfe-15a0-41ad-9e16-b24cd64357bc-mjkbc
Jul 21 00:35:01.736: INFO: At 2022-07-21 00:09:16 +0000 UTC - event for wrapped-volume-race-816edbfe-15a0-41ad-9e16-b24cd64357bc: {replication-controller } SuccessfulCreate: Created pod: wrapped-volume-race-816edbfe-15a0-41ad-9e16-b24cd64357bc-5w5rl
Jul 21 00:35:01.736: INFO: At 2022-07-21 00:09:16 +0000 UTC - event for wrapped-volume-race-816edbfe-15a0-41ad-9e16-b24cd64357bc: {replication-controller } SuccessfulCreate: Created pod: wrapped-volume-race-816edbfe-15a0-41ad-9e16-b24cd64357bc-2jzw7
Jul 21 00:35:01.736: INFO: At 2022-07-21 00:09:17 +0000 UTC - event for wrapped-volume-race-816edbfe-15a0-41ad-9e16-b24cd64357bc-2jzw7: {kubelet 9338k8s000} FailedMount: MountVolume.SetUp failed for volume "racey-configmap-37" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.736: INFO: At 2022-07-21 00:09:17 +0000 UTC - event for wrapped-volume-race-816edbfe-15a0-41ad-9e16-b24cd64357bc-2jzw7: {kubelet 9338k8s000} FailedMount: MountVolume.SetUp failed for volume "racey-configmap-13" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.736: INFO: At 2022-07-21 00:09:17 +0000 UTC - event for wrapped-volume-race-816edbfe-15a0-41ad-9e16-b24cd64357bc-5w5rl: {kubelet 9338k8s000} FailedMount: MountVolume.SetUp failed for volume "racey-configmap-25" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.736: INFO: At 2022-07-21 00:09:17 +0000 UTC - event for wrapped-volume-race-816edbfe-15a0-41ad-9e16-b24cd64357bc-5w5rl: {kubelet 9338k8s000} FailedMount: MountVolume.SetUp failed for volume "racey-configmap-6" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.736: INFO: At 2022-07-21 00:09:17 +0000 UTC - event for wrapped-volume-race-816edbfe-15a0-41ad-9e16-b24cd64357bc-5w5rl: {kubelet 9338k8s000} FailedMount: MountVolume.SetUp failed for volume "racey-configmap-11" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.736: INFO: At 2022-07-21 00:09:17 +0000 UTC - event for wrapped-volume-race-816edbfe-15a0-41ad-9e16-b24cd64357bc-mjkbc: {kubelet 9338k8s000} FailedMount: MountVolume.SetUp failed for volume "racey-configmap-10" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.736: INFO: At 2022-07-21 00:09:17 +0000 UTC - event for wrapped-volume-race-816edbfe-15a0-41ad-9e16-b24cd64357bc-mjkbc: {kubelet 9338k8s000} FailedMount: MountVolume.SetUp failed for volume "racey-configmap-35" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.736: INFO: At 2022-07-21 00:09:17 +0000 UTC - event for wrapped-volume-race-816edbfe-15a0-41ad-9e16-b24cd64357bc-mjkbc: {kubelet 9338k8s000} FailedMount: MountVolume.SetUp failed for volume "racey-configmap-41" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.736: INFO: At 2022-07-21 00:09:17 +0000 UTC - event for wrapped-volume-race-816edbfe-15a0-41ad-9e16-b24cd64357bc-mjkbc: {kubelet 9338k8s000} FailedMount: MountVolume.SetUp failed for volume "racey-configmap-47" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.736: INFO: At 2022-07-21 00:09:17 +0000 UTC - event for wrapped-volume-race-816edbfe-15a0-41ad-9e16-b24cd64357bc-mzzbt: {kubelet 9338k8s000} FailedMount: MountVolume.SetUp failed for volume "racey-configmap-23" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.736: INFO: At 2022-07-21 00:09:17 +0000 UTC - event for wrapped-volume-race-816edbfe-15a0-41ad-9e16-b24cd64357bc-mzzbt: {kubelet 9338k8s000} FailedMount: MountVolume.SetUp failed for volume "racey-configmap-13" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.736: INFO: At 2022-07-21 00:09:17 +0000 UTC - event for wrapped-volume-race-816edbfe-15a0-41ad-9e16-b24cd64357bc-mzzbt: {kubelet 9338k8s000} FailedMount: (combined from similar events): MountVolume.SetUp failed for volume "racey-configmap-8" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.736: INFO: At 2022-07-21 00:09:17 +0000 UTC - event for wrapped-volume-race-816edbfe-15a0-41ad-9e16-b24cd64357bc-vbcd7: {kubelet 9338k8s000} FailedMount: MountVolume.SetUp failed for volume "racey-configmap-24" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.736: INFO: At 2022-07-21 00:09:17 +0000 UTC - event for wrapped-volume-race-816edbfe-15a0-41ad-9e16-b24cd64357bc-vbcd7: {kubelet 9338k8s000} FailedMount: MountVolume.SetUp failed for volume "racey-configmap-6" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.736: INFO: At 2022-07-21 00:09:17 +0000 UTC - event for wrapped-volume-race-816edbfe-15a0-41ad-9e16-b24cd64357bc-vbcd7: {kubelet 9338k8s000} FailedMount: MountVolume.SetUp failed for volume "racey-configmap-18" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.736: INFO: At 2022-07-21 00:09:18 +0000 UTC - event for wrapped-volume-race-816edbfe-15a0-41ad-9e16-b24cd64357bc-vbcd7: {kubelet 9338k8s000} FailedMount: MountVolume.SetUp failed for volume "racey-configmap-9" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.736: INFO: At 2022-07-21 00:09:18 +0000 UTC - event for wrapped-volume-race-816edbfe-15a0-41ad-9e16-b24cd64357bc-vbcd7: {kubelet 9338k8s000} FailedMount: MountVolume.SetUp failed for volume "racey-configmap-35" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.737: INFO: At 2022-07-21 00:09:26 +0000 UTC - event for wrapped-volume-race-816edbfe-15a0-41ad-9e16-b24cd64357bc-2jzw7: {kubelet 9338k8s000} FailedMount: (combined from similar events): MountVolume.SetUp failed for volume "racey-configmap-24" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.737: INFO: At 2022-07-21 00:09:26 +0000 UTC - event for wrapped-volume-race-816edbfe-15a0-41ad-9e16-b24cd64357bc-5w5rl: {kubelet 9338k8s000} FailedMount: (combined from similar events): MountVolume.SetUp failed for volume "racey-configmap-33" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.737: INFO: At 2022-07-21 00:09:28 +0000 UTC - event for wrapped-volume-race-816edbfe-15a0-41ad-9e16-b24cd64357bc-mjkbc: {kubelet 9338k8s000} FailedMount: (combined from similar events): MountVolume.SetUp failed for volume "racey-configmap-45" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.737: INFO: At 2022-07-21 00:09:34 +0000 UTC - event for wrapped-volume-race-816edbfe-15a0-41ad-9e16-b24cd64357bc-vbcd7: {kubelet 9338k8s000} FailedMount: (combined from similar events): MountVolume.SetUp failed for volume "racey-configmap-22" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.737: INFO: At 2022-07-21 00:09:52 +0000 UTC - event for wrapped-volume-race-16e23b02-c36c-43ce-bdf2-dc7b07e73b2b: {replication-controller } SuccessfulCreate: Created pod: wrapped-volume-race-16e23b02-c36c-43ce-bdf2-dc7b07e73b2b-ztmz5
Jul 21 00:35:01.737: INFO: At 2022-07-21 00:09:52 +0000 UTC - event for wrapped-volume-race-16e23b02-c36c-43ce-bdf2-dc7b07e73b2b: {replication-controller } SuccessfulCreate: Created pod: wrapped-volume-race-16e23b02-c36c-43ce-bdf2-dc7b07e73b2b-g49bv
Jul 21 00:35:01.737: INFO: At 2022-07-21 00:09:52 +0000 UTC - event for wrapped-volume-race-16e23b02-c36c-43ce-bdf2-dc7b07e73b2b: {replication-controller } SuccessfulCreate: Created pod: wrapped-volume-race-16e23b02-c36c-43ce-bdf2-dc7b07e73b2b-pcf2k
Jul 21 00:35:01.737: INFO: At 2022-07-21 00:09:52 +0000 UTC - event for wrapped-volume-race-16e23b02-c36c-43ce-bdf2-dc7b07e73b2b: {replication-controller } SuccessfulCreate: Created pod: wrapped-volume-race-16e23b02-c36c-43ce-bdf2-dc7b07e73b2b-t2fwl
Jul 21 00:35:01.737: INFO: At 2022-07-21 00:09:52 +0000 UTC - event for wrapped-volume-race-16e23b02-c36c-43ce-bdf2-dc7b07e73b2b: {replication-controller } SuccessfulCreate: Created pod: wrapped-volume-race-16e23b02-c36c-43ce-bdf2-dc7b07e73b2b-n4f9l
Jul 21 00:35:01.737: INFO: At 2022-07-21 00:09:54 +0000 UTC - event for wrapped-volume-race-16e23b02-c36c-43ce-bdf2-dc7b07e73b2b-g49bv: {kubelet 9338k8s010} FailedMount: MountVolume.SetUp failed for volume "racey-configmap-36" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.737: INFO: At 2022-07-21 00:09:54 +0000 UTC - event for wrapped-volume-race-16e23b02-c36c-43ce-bdf2-dc7b07e73b2b-g49bv: {kubelet 9338k8s010} FailedMount: MountVolume.SetUp failed for volume "racey-configmap-43" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.737: INFO: At 2022-07-21 00:09:54 +0000 UTC - event for wrapped-volume-race-16e23b02-c36c-43ce-bdf2-dc7b07e73b2b-g49bv: {kubelet 9338k8s010} FailedMount: MountVolume.SetUp failed for volume "racey-configmap-9" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.737: INFO: At 2022-07-21 00:09:54 +0000 UTC - event for wrapped-volume-race-16e23b02-c36c-43ce-bdf2-dc7b07e73b2b-g49bv: {kubelet 9338k8s010} FailedMount: MountVolume.SetUp failed for volume "racey-configmap-0" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.737: INFO: At 2022-07-21 00:09:54 +0000 UTC - event for wrapped-volume-race-16e23b02-c36c-43ce-bdf2-dc7b07e73b2b-n4f9l: {kubelet 9338k8s010} FailedMount: MountVolume.SetUp failed for volume "racey-configmap-19" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.737: INFO: At 2022-07-21 00:09:54 +0000 UTC - event for wrapped-volume-race-16e23b02-c36c-43ce-bdf2-dc7b07e73b2b-n4f9l: {kubelet 9338k8s010} FailedMount: MountVolume.SetUp failed for volume "racey-configmap-43" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.737: INFO: At 2022-07-21 00:09:54 +0000 UTC - event for wrapped-volume-race-16e23b02-c36c-43ce-bdf2-dc7b07e73b2b-n4f9l: {kubelet 9338k8s010} FailedMount: MountVolume.SetUp failed for volume "racey-configmap-36" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.737: INFO: At 2022-07-21 00:09:54 +0000 UTC - event for wrapped-volume-race-16e23b02-c36c-43ce-bdf2-dc7b07e73b2b-pcf2k: {kubelet 9338k8s010} FailedMount: MountVolume.SetUp failed for volume "racey-configmap-15" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.737: INFO: At 2022-07-21 00:09:54 +0000 UTC - event for wrapped-volume-race-16e23b02-c36c-43ce-bdf2-dc7b07e73b2b-pcf2k: {kubelet 9338k8s010} FailedMount: MountVolume.SetUp failed for volume "racey-configmap-32" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.737: INFO: At 2022-07-21 00:09:54 +0000 UTC - event for wrapped-volume-race-16e23b02-c36c-43ce-bdf2-dc7b07e73b2b-t2fwl: {kubelet 9338k8s010} FailedMount: MountVolume.SetUp failed for volume "racey-configmap-12" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.737: INFO: At 2022-07-21 00:09:54 +0000 UTC - event for wrapped-volume-race-16e23b02-c36c-43ce-bdf2-dc7b07e73b2b-t2fwl: {kubelet 9338k8s010} FailedMount: MountVolume.SetUp failed for volume "racey-configmap-1" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.737: INFO: At 2022-07-21 00:09:54 +0000 UTC - event for wrapped-volume-race-16e23b02-c36c-43ce-bdf2-dc7b07e73b2b-t2fwl: {kubelet 9338k8s010} FailedMount: MountVolume.SetUp failed for volume "racey-configmap-2" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.737: INFO: At 2022-07-21 00:09:54 +0000 UTC - event for wrapped-volume-race-16e23b02-c36c-43ce-bdf2-dc7b07e73b2b-t2fwl: {kubelet 9338k8s010} FailedMount: MountVolume.SetUp failed for volume "racey-configmap-5" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.737: INFO: At 2022-07-21 00:09:54 +0000 UTC - event for wrapped-volume-race-16e23b02-c36c-43ce-bdf2-dc7b07e73b2b-t2fwl: {kubelet 9338k8s010} FailedMount: MountVolume.SetUp failed for volume "racey-configmap-15" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.737: INFO: At 2022-07-21 00:09:54 +0000 UTC - event for wrapped-volume-race-16e23b02-c36c-43ce-bdf2-dc7b07e73b2b-ztmz5: {kubelet 9338k8s010} FailedMount: MountVolume.SetUp failed for volume "racey-configmap-36" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.737: INFO: At 2022-07-21 00:09:54 +0000 UTC - event for wrapped-volume-race-16e23b02-c36c-43ce-bdf2-dc7b07e73b2b-ztmz5: {kubelet 9338k8s010} FailedMount: MountVolume.SetUp failed for volume "racey-configmap-0" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.737: INFO: At 2022-07-21 00:09:55 +0000 UTC - event for wrapped-volume-race-16e23b02-c36c-43ce-bdf2-dc7b07e73b2b-g49bv: {kubelet 9338k8s010} FailedMount: (combined from similar events): MountVolume.SetUp failed for volume "racey-configmap-11" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.737: INFO: At 2022-07-21 00:09:56 +0000 UTC - event for wrapped-volume-race-16e23b02-c36c-43ce-bdf2-dc7b07e73b2b-n4f9l: {kubelet 9338k8s010} FailedMount: (combined from similar events): MountVolume.SetUp failed for volume "racey-configmap-47" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.737: INFO: At 2022-07-21 00:10:06 +0000 UTC - event for wrapped-volume-race-16e23b02-c36c-43ce-bdf2-dc7b07e73b2b-pcf2k: {kubelet 9338k8s010} FailedMount: (combined from similar events): MountVolume.SetUp failed for volume "racey-configmap-9" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.737: INFO: At 2022-07-21 00:10:07 +0000 UTC - event for wrapped-volume-race-16e23b02-c36c-43ce-bdf2-dc7b07e73b2b-t2fwl: {kubelet 9338k8s010} FailedMount: (combined from similar events): MountVolume.SetUp failed for volume "racey-configmap-33" : failed to sync configmap cache: timed out waiting for the condition
Jul 21 00:35:01.737: INFO: At 2022-07-21 00:10:08 +0000 UTC - event for wrapped-volume-race-16e23b02-c36c-43ce-bdf2-dc7b07e73b2b-ztmz5: {kubelet 9338k8s010} FailedMount: (combined from similar events): MountVolume.SetUp failed for volume "racey-configmap-7" : failed to sync configmap cache: timed out waiting for the condition
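The repeated FailedMount events above all share one cause: the kubelet could not sync its ConfigMap cache within the mount timeout, so MountVolume.SetUp kept failing for the "racey-configmap-*" volumes. This spec provokes exactly that path by running several pods that each mount a large number of ConfigMap volumes at once. A minimal sketch of that pod shape follows, using client-go types; the image, mount paths, and the count of 50 volumes are illustrative assumptions (the events above reference volumes up to racey-configmap-47), not the test's exact spec.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// wrappedVolumeRacePod builds a pod that mounts many ConfigMap volumes,
// the shape this spec uses to stress the kubelet's configmap cache sync.
func wrappedVolumeRacePod(name string, configMapCount int) *v1.Pod {
	var volumes []v1.Volume
	var mounts []v1.VolumeMount
	for i := 0; i < configMapCount; i++ {
		volName := fmt.Sprintf("racey-configmap-%d", i) // matches the volume names in the events above
		volumes = append(volumes, v1.Volume{
			Name: volName,
			VolumeSource: v1.VolumeSource{
				ConfigMap: &v1.ConfigMapVolumeSource{
					LocalObjectReference: v1.LocalObjectReference{Name: volName},
				},
			},
		})
		mounts = append(mounts, v1.VolumeMount{
			Name:      volName,
			MountPath: "/etc/config/" + volName, // illustrative mount path
		})
	}
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:         "test-container", // the container the conditions below report as unready
				Image:        "registry.k8s.io/pause:3.5", // assumed image
				VolumeMounts: mounts,
			}},
			Volumes: volumes,
		},
	}
}

func main() {
	pod := wrappedVolumeRacePod("wrapped-volume-race-example", 50)
	fmt.Printf("pod %s mounts %d configmap volumes\n", pod.Name, len(pod.Spec.Volumes))
}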
Jul 21 00:35:01.851: INFO: POD                                                             NODE        PHASE    GRACE  CONDITIONS
Jul 21 00:35:01.852: INFO: wrapped-volume-race-16e23b02-c36c-43ce-bdf2-dc7b07e73b2b-g49bv  9338k8s010  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-07-21 00:09:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-07-21 00:09:52 +0000 UTC ContainersNotReady containers with unready status: [test-container]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-07-21 00:09:52 +0000 UTC ContainersNotReady containers with unready status: [test-container]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-07-21 00:09:52 +0000 UTC  }]
Jul 21 00:35:01.852: INFO: wrapped-volume-race-16e23b02-c36c-43ce-bdf2-dc7b07e73b2b-n4f9l  9338k8s010  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-07-21 00:09:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-07-21 00:09:52 +0000 UTC ContainersNotReady containers with unready status: [test-container]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-07-21 00:09:52 +0000 UTC ContainersNotReady containers with unready status: [test-container]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-07-21 00:09:52 +0000 UTC  }]
Jul 21 00:35:01.852: INFO: wrapped-volume-race-16e23b02-c36c-43ce-bdf2-dc7b07e73b2b-pcf2k  9338k8s010  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-07-21 00:09:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-07-21 00:09:52 +0000 UTC ContainersNotReady containers with unready status: [test-container]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-07-21 00:09:52 +0000 UTC ContainersNotReady containers with unready status: [test-container]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-07-21 00:09:52 +0000 UTC  }]
Jul 21 00:35:01.852: INFO: wrapped-volume-race-16e23b02-c36c-43ce-bdf2-dc7b07e73b2b-t2fwl  9338k8s010  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-07-21 00:09:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-07-21 00:09:52 +0000 UTC ContainersNotReady containers with unready status: [test-container]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-07-21 00:09:52 +0000 UTC ContainersNotReady containers with unready status: [test-container]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-07-21 00:09:52 +0000 UTC  }]
Jul 21 00:35:01.852: INFO: wrapped-volume-race-16e23b02-c36c-43ce-bdf2-dc7b07e73b2b-ztmz5  9338k8s010  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-07-21 00:09:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-07-21 00:09:52 +0000 UTC ContainersNotReady containers with unready status: [test-container]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-07-21 00:09:52 +0000 UTC ContainersNotReady containers with unready status: [test-container]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-07-21 00:09:52 +0000 UTC  }]
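The condition dump above narrows the failure: PodScheduled and Initialized are True while Ready and ContainersReady are stuck False with reason ContainersNotReady. In other words, all five pods were placed on 9338k8s010 and initialized, but test-container never started because its volume mounts never completed. A short client-go sketch that reproduces this per-pod condition view (the namespace value is hypothetical; the kubeconfig path is the one this run uses):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path taken from this run's log output.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/tmp1415318856/kubeconfig/kubeconfig.westus2.json")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	namespace := "emptydir-wrapper-1234" // hypothetical; substitute the test's namespace
	pods, err := client.CoreV1().Pods(namespace).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s  %s  %s\n", p.Name, p.Spec.NodeName, p.Status.Phase)
		for _, c := range p.Status.Conditions {
			// Ready/ContainersReady stuck False with reason ContainersNotReady
			// points at an unready container, not at scheduling.
			fmt.Printf("  %s=%s reason=%q\n", c.Type, c.Status, c.Reason)
		}
	}
}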
... skipping 68 lines ...
• Failure [1590.131 seconds]
[sig-storage] EmptyDir wrapper volumes
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance] [It]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul 21 00:14:58.246: Failed waiting for pod wrapped-volume-race-16e23b02-c36c-43ce-bdf2-dc7b07e73b2b-g49bv to enter running state
  Unexpected error:
      <*errors.errorString | 0xc0002be280>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/empty_dir_wrapper.go:405
------------------------------
{"msg":"FAILED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":46,"completed":16,"skipped":2787,"failed":11,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","[sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] GMSA support works end to end","[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability","[sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability","[sig-windows] [Feature:Windows] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval","[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","[sig-apps] Daemon set [Serial] should verify changes to a daemon set status [Conformance]","[sig-windows] [Feature:Windows] Cpu Resources [Serial] Container limits should not be exceeded after waiting 2 minutes","[sig-apps] Daemon set [Serial] should list and delete a collection of DaemonSets [Conformance]","[sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
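The generic message "timed out waiting for the condition" in the failure above is apimachinery's wait.ErrWaitTimeout, surfaced when a poll loop gives up before the pod reaches Running. A sketch of that waiting pattern follows, assuming client-go; the interval and timeout values are assumptions, and the e2e framework's own helper differs in detail:

package e2esketch

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodRunning polls the pod's phase until it is Running, it reaches a
// terminal phase, or the timeout fires. On timeout, the returned error is
// wait.ErrWaitTimeout: "timed out waiting for the condition".
func waitForPodRunning(client kubernetes.Interface, ns, podName string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		p, err := client.CoreV1().Pods(ns).Get(context.TODO(), podName, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		switch p.Status.Phase {
		case v1.PodRunning:
			return true, nil
		case v1.PodSucceeded, v1.PodFailed:
			// Terminal but not Running: stop polling with a real error.
			return false, fmt.Errorf("pod %s/%s ended in phase %s", ns, podName, p.Status.Phase)
		}
		return false, nil // still Pending (as above): keep polling until the timeout fires
	})
}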
------------------------------
[sig-node] Variable Expansion 
  should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Variable Expansion
... skipping 2 lines ...
Jul 21 00:35:03.736: INFO: >>> kubeConfig: /root/tmp1415318856/kubeconfig/kubeconfig.westus2.json
STEP: Building a namespace api object, basename var-expansion
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-7390
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod with failed condition
STEP: updating the pod
Jul 21 00:37:05.094: INFO: Successfully updated pod "var-expansion-bf45b2bb-0ebe-4d91-bfe8-24a6044c2cfc"
STEP: waiting for pod running
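This spec creates a pod whose volume mount uses a subPathExpr that initially fails to expand, leaving the pod stuck, then updates the pod so the expansion can succeed and waits for it to reach Running; the run is cut off at that last step by the job deadline below. A sketch of the mount shape involved, with illustrative (assumed) field values rather than the test's exact spec:

package e2esketch

import v1 "k8s.io/api/core/v1"

// failingSubPathContainer sketches a container whose volume mount path is
// expanded from the container's environment at mount time. An expansion
// that cannot be resolved keeps the pod from starting until it is updated.
func failingSubPathContainer() v1.Container {
	return v1.Container{
		Name:  "dapi-container", // assumed name
		Image: "registry.k8s.io/e2e-test-images/busybox:1.29", // assumed image
		Env: []v1.EnvVar{{
			Name: "POD_NAME",
			ValueFrom: &v1.EnvVarSource{
				FieldRef: &v1.ObjectFieldSelector{FieldPath: "metadata.name"},
			},
		}},
		VolumeMounts: []v1.VolumeMount{{
			Name:        "workdir1",
			MountPath:   "/logscontainer",
			SubPathExpr: "$(POD_NAME)", // a failing expansion here leaves the pod Pending until the pod is modified
		}},
	}
}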
{"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:255","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15m0s grace period","severity":"error","time":"2022-07-21T00:37:08Z"}
{"component":"entrypoint","error":"os: process already finished","file":"k8s.io/test-infra/prow/entrypoint/run.go:257","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Could not kill process after grace period","severity":"error","time":"2022-07-21T00:37:08Z"}