PR       | MartinForReal: [release-1.23] Add support for specifying probe protocol / probe port via annotation per service port
Result   | ABORTED
Tests    | 0 failed / 0 succeeded
Started  |
Elapsed  | 1h0m
Revision | baa4fe63606c6a8f71d3f0351f1ecf1122b828ea
Refs     | 2825
... skipping 123 lines ... https://docs.docker.com/engine/reference/commandline/login/#credentials-store Login Succeeded /home/prow/go/src/sigs.k8s.io/cloud-provider-azure /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure Image Tag is 3f133d6 Error response from daemon: manifest for capzci.azurecr.io/azure-cloud-controller-manager:3f133d6 not found: manifest unknown: manifest tagged by "3f133d6" is not found Build Linux Azure amd64 cloud controller manager make: Entering directory '/home/prow/go/src/sigs.k8s.io/cloud-provider-azure' make ARCH=amd64 build-ccm-image make[1]: Entering directory '/home/prow/go/src/sigs.k8s.io/cloud-provider-azure' docker buildx inspect img-builder > /dev/null || docker buildx create --name img-builder --use ERROR: no builder "img-builder" found img-builder # enable qemu for arm64 build # https://github.com/docker/buildx/issues/464#issuecomment-741507760 docker run --privileged --rm tonistiigi/binfmt --uninstall qemu-aarch64 Unable to find image 'tonistiigi/binfmt:latest' locally latest: Pulling from tonistiigi/binfmt ... skipping 1343 lines ... certificate.cert-manager.io "selfsigned-cert" deleted # Create secret for AzureClusterIdentity ./hack/create-identity-secret.sh make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure' make[2]: Nothing to be done for 'kubectl'. make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure' Error from server (NotFound): secrets "cluster-identity-secret" not found secret/cluster-identity-secret created secret/cluster-identity-secret labeled # Create customized cloud provider configs ./hack/create-custom-cloud-provider-config.sh make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure' make[2]: Nothing to be done for 'kubectl'. ... skipping 222 lines ... +++ [1125 03:46:05] Building go targets for linux/amd64: vendor/github.com/onsi/ginkgo/ginkgo > non-static build: k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo make[1]: Leaving directory '/home/prow/go/src/k8s.io/kubernetes' Conformance test: not doing test setup. I1125 03:46:08.269011 91268 e2e.go:132] Starting e2e run "60dd02c7-85b4-4488-8e4f-75486479adc9" on Ginkgo node 1 {"msg":"Test Suite starting","total":335,"completed":0,"skipped":0,"failed":0} Running Suite: Kubernetes e2e suite =================================== Random Seed: [1m1669347968[0m - Will randomize all specs Will run [1m335[0m of [1m7052[0m specs Nov 25 03:46:11.155: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig ... skipping 28 lines ... W1125 03:46:12.179754 91268 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [1mSTEP[0m: Creating a pod to test downward api env vars Nov 25 03:46:12.450: INFO: Waiting up to 5m0s for pod "downward-api-2bdf7231-edbc-49c3-8651-bbe20726ea57" in namespace "downward-api-2101" to be "Succeeded or Failed" Nov 25 03:46:12.503: INFO: Pod "downward-api-2bdf7231-edbc-49c3-8651-bbe20726ea57": Phase="Pending", Reason="", readiness=false. 
Elapsed: 53.333262ms Nov 25 03:46:14.561: INFO: Pod "downward-api-2bdf7231-edbc-49c3-8651-bbe20726ea57": Phase="Pending", Reason="", readiness=false. Elapsed: 2.110819554s Nov 25 03:46:16.618: INFO: Pod "downward-api-2bdf7231-edbc-49c3-8651-bbe20726ea57": Phase="Running", Reason="", readiness=false. Elapsed: 4.167609598s Nov 25 03:46:18.674: INFO: Pod "downward-api-2bdf7231-edbc-49c3-8651-bbe20726ea57": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.224226619s [1mSTEP[0m: Saw pod success Nov 25 03:46:18.674: INFO: Pod "downward-api-2bdf7231-edbc-49c3-8651-bbe20726ea57" satisfied condition "Succeeded or Failed" Nov 25 03:46:18.729: INFO: Trying to get logs from node capz-wgj520-md-0-spq5f pod downward-api-2bdf7231-edbc-49c3-8651-bbe20726ea57 container dapi-container: <nil> [1mSTEP[0m: delete the pod Nov 25 03:46:18.862: INFO: Waiting for pod downward-api-2bdf7231-edbc-49c3-8651-bbe20726ea57 to disappear Nov 25 03:46:18.915: INFO: Pod downward-api-2bdf7231-edbc-49c3-8651-bbe20726ea57 no longer exists [AfterEach] [sig-node] Downward API /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:46:18.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "downward-api-2101" for this suite. [32m•[0m{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":335,"completed":1,"skipped":50,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-node] Variable Expansion[0m [1mshould allow substituting values in a container's args [NodeConformance] [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-node] Variable Expansion ... skipping 3 lines ... [1mSTEP[0m: Building a namespace api object, basename var-expansion [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [1mSTEP[0m: Creating a pod to test substitution in container's args Nov 25 03:46:19.477: INFO: Waiting up to 5m0s for pod "var-expansion-f1458ab5-0d5a-4148-8fc1-3cce92a8659d" in namespace "var-expansion-7087" to be "Succeeded or Failed" Nov 25 03:46:19.530: INFO: Pod "var-expansion-f1458ab5-0d5a-4148-8fc1-3cce92a8659d": Phase="Pending", Reason="", readiness=false. Elapsed: 53.358515ms Nov 25 03:46:21.587: INFO: Pod "var-expansion-f1458ab5-0d5a-4148-8fc1-3cce92a8659d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109698688s Nov 25 03:46:23.643: INFO: Pod "var-expansion-f1458ab5-0d5a-4148-8fc1-3cce92a8659d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.166222441s [1mSTEP[0m: Saw pod success Nov 25 03:46:23.643: INFO: Pod "var-expansion-f1458ab5-0d5a-4148-8fc1-3cce92a8659d" satisfied condition "Succeeded or Failed" Nov 25 03:46:23.698: INFO: Trying to get logs from node capz-wgj520-md-0-spq5f pod var-expansion-f1458ab5-0d5a-4148-8fc1-3cce92a8659d container dapi-container: <nil> [1mSTEP[0m: delete the pod Nov 25 03:46:23.817: INFO: Waiting for pod var-expansion-f1458ab5-0d5a-4148-8fc1-3cce92a8659d to disappear Nov 25 03:46:23.870: INFO: Pod var-expansion-f1458ab5-0d5a-4148-8fc1-3cce92a8659d no longer exists [AfterEach] [sig-node] Variable Expansion /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:46:23.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "var-expansion-7087" for this suite. [32m•[0m{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":335,"completed":2,"skipped":59,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-cli] Kubectl client[0m [90mProxy server[0m [1mshould support proxy with --port 0 [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-cli] Kubectl client ... skipping 11 lines ... Nov 25 03:46:24.373: INFO: Asynchronously running '/home/prow/go/src/k8s.io/kubernetes/_output/bin/kubectl kubectl --server=https://capz-wgj520-5558bd25.westus2.cloudapp.azure.com:6443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig --namespace=kubectl-5147 proxy -p 0 --disable-filter' [1mSTEP[0m: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:46:24.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "kubectl-5147" for this suite. [32m•[0m{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":335,"completed":3,"skipped":73,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-apps] ReplicaSet[0m [1mshould adopt matching pods on creation and release no longer matching pods [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-apps] ReplicaSet ... skipping 21 lines ... Nov 25 03:46:41.535: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 [1mSTEP[0m: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:46:41.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "replicaset-4283" for this suite. 
[32m•[0m{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":335,"completed":4,"skipped":97,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-network] Services[0m [1mshould test the lifecycle of an Endpoint [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-network] Services ... skipping 20 lines ... [AfterEach] [sig-network] Services /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:46:42.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "services-5797" for this suite. [AfterEach] [sig-network] Services /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756 [32m•[0m{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":335,"completed":5,"skipped":110,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-storage] EmptyDir volumes[0m [1mvolume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-storage] EmptyDir volumes ... skipping 3 lines ... [1mSTEP[0m: Building a namespace api object, basename emptydir [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [1mSTEP[0m: Creating a pod to test emptydir volume type on tmpfs Nov 25 03:46:43.498: INFO: Waiting up to 5m0s for pod "pod-644b797d-63c7-4e44-95e4-2369a08601a1" in namespace "emptydir-1804" to be "Succeeded or Failed" Nov 25 03:46:43.552: INFO: Pod "pod-644b797d-63c7-4e44-95e4-2369a08601a1": Phase="Pending", Reason="", readiness=false. Elapsed: 53.272856ms Nov 25 03:46:45.610: INFO: Pod "pod-644b797d-63c7-4e44-95e4-2369a08601a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111345105s Nov 25 03:46:47.666: INFO: Pod "pod-644b797d-63c7-4e44-95e4-2369a08601a1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.167279281s Nov 25 03:46:49.722: INFO: Pod "pod-644b797d-63c7-4e44-95e4-2369a08601a1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.223166389s Nov 25 03:46:51.777: INFO: Pod "pod-644b797d-63c7-4e44-95e4-2369a08601a1": Phase="Running", Reason="", readiness=true. Elapsed: 8.278847536s Nov 25 03:46:53.833: INFO: Pod "pod-644b797d-63c7-4e44-95e4-2369a08601a1": Phase="Running", Reason="", readiness=false. Elapsed: 10.334489929s Nov 25 03:46:55.889: INFO: Pod "pod-644b797d-63c7-4e44-95e4-2369a08601a1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.390370822s [1mSTEP[0m: Saw pod success Nov 25 03:46:55.889: INFO: Pod "pod-644b797d-63c7-4e44-95e4-2369a08601a1" satisfied condition "Succeeded or Failed" Nov 25 03:46:55.944: INFO: Trying to get logs from node capz-wgj520-md-0-spq5f pod pod-644b797d-63c7-4e44-95e4-2369a08601a1 container test-container: <nil> [1mSTEP[0m: delete the pod Nov 25 03:46:56.065: INFO: Waiting for pod pod-644b797d-63c7-4e44-95e4-2369a08601a1 to disappear Nov 25 03:46:56.118: INFO: Pod pod-644b797d-63c7-4e44-95e4-2369a08601a1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:46:56.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "emptydir-1804" for this suite. [32m•[0m{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":335,"completed":6,"skipped":114,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-node] KubeletManagedEtcHosts[0m [1mshould test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-node] KubeletManagedEtcHosts ... skipping 70 lines ... Nov 25 03:47:11.197: INFO: ExecWithOptions: execute(POST https://capz-wgj520-5558bd25.westus2.cloudapp.azure.com:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-7144/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-2&container=busybox-2&stderr=true&stdout=true %!s(MISSING)) Nov 25 03:47:11.570: INFO: Exec stderr: "" [AfterEach] [sig-node] KubeletManagedEtcHosts /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:47:11.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "e2e-kubelet-etc-hosts-7144" for this suite. [32m•[0m{"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":335,"completed":7,"skipped":130,"failed":0} [36mS[0m [90m------------------------------[0m [0m[sig-storage] ConfigMap[0m [1mshould be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-storage] ConfigMap ... skipping 4 lines ... 
[1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [1mSTEP[0m: Creating configMap with name configmap-test-volume-map-e2360a10-2ec9-40d4-a4e4-147a60485926 [1mSTEP[0m: Creating a pod to test consume configMaps Nov 25 03:47:12.183: INFO: Waiting up to 5m0s for pod "pod-configmaps-2cfd7ef2-f9ac-4daa-8edd-b66724ff7548" in namespace "configmap-7683" to be "Succeeded or Failed" Nov 25 03:47:12.236: INFO: Pod "pod-configmaps-2cfd7ef2-f9ac-4daa-8edd-b66724ff7548": Phase="Pending", Reason="", readiness=false. Elapsed: 52.937253ms Nov 25 03:47:14.293: INFO: Pod "pod-configmaps-2cfd7ef2-f9ac-4daa-8edd-b66724ff7548": Phase="Running", Reason="", readiness=true. Elapsed: 2.109997215s Nov 25 03:47:16.350: INFO: Pod "pod-configmaps-2cfd7ef2-f9ac-4daa-8edd-b66724ff7548": Phase="Running", Reason="", readiness=false. Elapsed: 4.166639313s Nov 25 03:47:18.406: INFO: Pod "pod-configmaps-2cfd7ef2-f9ac-4daa-8edd-b66724ff7548": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.22278475s [1mSTEP[0m: Saw pod success Nov 25 03:47:18.406: INFO: Pod "pod-configmaps-2cfd7ef2-f9ac-4daa-8edd-b66724ff7548" satisfied condition "Succeeded or Failed" Nov 25 03:47:18.461: INFO: Trying to get logs from node capz-wgj520-md-0-qgnj5 pod pod-configmaps-2cfd7ef2-f9ac-4daa-8edd-b66724ff7548 container agnhost-container: <nil> [1mSTEP[0m: delete the pod Nov 25 03:47:18.590: INFO: Waiting for pod pod-configmaps-2cfd7ef2-f9ac-4daa-8edd-b66724ff7548 to disappear Nov 25 03:47:18.643: INFO: Pod pod-configmaps-2cfd7ef2-f9ac-4daa-8edd-b66724ff7548 no longer exists [AfterEach] [sig-storage] ConfigMap /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:47:18.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "configmap-7683" for this suite. [32m•[0m{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":335,"completed":8,"skipped":131,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-node] Pods[0m [1mshould delete a collection of pods [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-node] Pods ... skipping 10 lines ... 
[1mSTEP[0m: Create set of pods Nov 25 03:47:19.199: INFO: created test-pod-1 Nov 25 03:47:19.256: INFO: created test-pod-2 Nov 25 03:47:19.311: INFO: created test-pod-3 [1mSTEP[0m: waiting for all 3 pods to be running Nov 25 03:47:19.311: INFO: Waiting up to 5m0s for all pods (need at least 3) in namespace 'pods-8972' to be running and ready Nov 25 03:47:19.474: INFO: The status of Pod test-pod-1 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Nov 25 03:47:19.474: INFO: The status of Pod test-pod-2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Nov 25 03:47:19.474: INFO: The status of Pod test-pod-3 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Nov 25 03:47:19.474: INFO: 0 / 3 pods in namespace 'pods-8972' are running and ready (0 seconds elapsed) Nov 25 03:47:19.474: INFO: expected 0 pod replicas in namespace 'pods-8972', 0 are Running and Ready. Nov 25 03:47:19.474: INFO: POD NODE PHASE GRACE CONDITIONS Nov 25 03:47:19.474: INFO: test-pod-1 capz-wgj520-md-0-qgnj5 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 03:47:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 03:47:19 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 03:47:19 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 03:47:19 +0000 UTC }] Nov 25 03:47:19.474: INFO: test-pod-2 capz-wgj520-md-0-spq5f Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 03:47:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 03:47:19 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 03:47:19 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 03:47:19 +0000 UTC }] Nov 25 03:47:19.474: INFO: test-pod-3 capz-wgj520-md-0-qgnj5 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 03:47:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 03:47:19 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 03:47:19 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 03:47:19 +0000 UTC }] ... skipping 5 lines ... Nov 25 03:47:22.828: INFO: Pod quantity 3 is different from expected quantity 0 Nov 25 03:47:23.829: INFO: Pod quantity 3 is different from expected quantity 0 [AfterEach] [sig-node] Pods /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:47:24.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "pods-8972" for this suite. 
[32m•[0m{"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":335,"completed":9,"skipped":138,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-storage] EmptyDir volumes[0m [1mshould support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-storage] EmptyDir volumes ... skipping 3 lines ... [1mSTEP[0m: Building a namespace api object, basename emptydir [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [1mSTEP[0m: Creating a pod to test emptydir 0666 on tmpfs Nov 25 03:47:25.382: INFO: Waiting up to 5m0s for pod "pod-c58f4d08-eab9-4871-822e-3f4c92c674e7" in namespace "emptydir-4208" to be "Succeeded or Failed" Nov 25 03:47:25.436: INFO: Pod "pod-c58f4d08-eab9-4871-822e-3f4c92c674e7": Phase="Pending", Reason="", readiness=false. Elapsed: 53.586771ms Nov 25 03:47:27.492: INFO: Pod "pod-c58f4d08-eab9-4871-822e-3f4c92c674e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.110172413s Nov 25 03:47:29.549: INFO: Pod "pod-c58f4d08-eab9-4871-822e-3f4c92c674e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.166805294s [1mSTEP[0m: Saw pod success Nov 25 03:47:29.549: INFO: Pod "pod-c58f4d08-eab9-4871-822e-3f4c92c674e7" satisfied condition "Succeeded or Failed" Nov 25 03:47:29.605: INFO: Trying to get logs from node capz-wgj520-md-0-spq5f pod pod-c58f4d08-eab9-4871-822e-3f4c92c674e7 container test-container: <nil> [1mSTEP[0m: delete the pod Nov 25 03:47:29.772: INFO: Waiting for pod pod-c58f4d08-eab9-4871-822e-3f4c92c674e7 to disappear Nov 25 03:47:29.825: INFO: Pod pod-c58f4d08-eab9-4871-822e-3f4c92c674e7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:47:29.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "emptydir-4208" for this suite. [32m•[0m{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":335,"completed":10,"skipped":148,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-storage] ConfigMap[0m [1mshould be immutable if `immutable` field is set [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-storage] ConfigMap ... skipping 6 lines ... [It] should be immutable if `immutable` field is set [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [AfterEach] [sig-storage] ConfigMap /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:47:30.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "configmap-2318" for this suite. 
[32m•[0m{"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":335,"completed":11,"skipped":157,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-node] PrivilegedPod [NodeConformance][0m [1mshould enable privileged commands [LinuxOnly][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/privileged.go:49[0m [BeforeEach] [sig-node] PrivilegedPod [NodeConformance] ... skipping 24 lines ... Nov 25 03:47:36.555: INFO: ExecWithOptions: Clientset creation Nov 25 03:47:36.556: INFO: ExecWithOptions: execute(POST https://capz-wgj520-5558bd25.westus2.cloudapp.azure.com:6443/api/v1/namespaces/e2e-privileged-pod-5540/pods/privileged-pod/exec?command=ip&command=link&command=add&command=dummy1&command=type&command=dummy&container=not-privileged-container&container=not-privileged-container&stderr=true&stdout=true %!s(MISSING)) [AfterEach] [sig-node] PrivilegedPod [NodeConformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:47:37.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "e2e-privileged-pod-5540" for this suite. [32m•[0m{"msg":"PASSED [sig-node] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]","total":335,"completed":12,"skipped":178,"failed":0} [90m------------------------------[0m [0m[sig-storage] ConfigMap[0m [1mshould be consumable from pods in volume [NodeConformance] [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-storage] ConfigMap ... skipping 4 lines ... [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [1mSTEP[0m: Creating configMap with name configmap-test-volume-a9030fdf-502f-4227-b6c4-c0ac0f04838a [1mSTEP[0m: Creating a pod to test consume configMaps Nov 25 03:47:37.661: INFO: Waiting up to 5m0s for pod "pod-configmaps-84d34d5c-9f7e-4f18-9331-8adb5b255167" in namespace "configmap-8430" to be "Succeeded or Failed" Nov 25 03:47:37.715: INFO: Pod "pod-configmaps-84d34d5c-9f7e-4f18-9331-8adb5b255167": Phase="Pending", Reason="", readiness=false. Elapsed: 53.667811ms Nov 25 03:47:39.771: INFO: Pod "pod-configmaps-84d34d5c-9f7e-4f18-9331-8adb5b255167": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109556456s Nov 25 03:47:41.827: INFO: Pod "pod-configmaps-84d34d5c-9f7e-4f18-9331-8adb5b255167": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.166205984s [1mSTEP[0m: Saw pod success Nov 25 03:47:41.827: INFO: Pod "pod-configmaps-84d34d5c-9f7e-4f18-9331-8adb5b255167" satisfied condition "Succeeded or Failed" Nov 25 03:47:41.882: INFO: Trying to get logs from node capz-wgj520-md-0-spq5f pod pod-configmaps-84d34d5c-9f7e-4f18-9331-8adb5b255167 container agnhost-container: <nil> [1mSTEP[0m: delete the pod Nov 25 03:47:42.002: INFO: Waiting for pod pod-configmaps-84d34d5c-9f7e-4f18-9331-8adb5b255167 to disappear Nov 25 03:47:42.056: INFO: Pod pod-configmaps-84d34d5c-9f7e-4f18-9331-8adb5b255167 no longer exists [AfterEach] [sig-storage] ConfigMap /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:47:42.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "configmap-8430" for this suite. [32m•[0m{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":335,"completed":13,"skipped":178,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-node] Secrets[0m [1mshould be consumable from pods in env vars [NodeConformance] [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-node] Secrets ... skipping 4 lines ... [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [1mSTEP[0m: Creating secret with name secret-test-447e8ba1-90ff-44cb-bb8d-8753b63a20f5 [1mSTEP[0m: Creating a pod to test consume secrets Nov 25 03:47:42.669: INFO: Waiting up to 5m0s for pod "pod-secrets-04353e6c-7cf7-4f0f-a238-e80e09fa3d33" in namespace "secrets-2048" to be "Succeeded or Failed" Nov 25 03:47:42.722: INFO: Pod "pod-secrets-04353e6c-7cf7-4f0f-a238-e80e09fa3d33": Phase="Pending", Reason="", readiness=false. Elapsed: 53.144317ms Nov 25 03:47:44.778: INFO: Pod "pod-secrets-04353e6c-7cf7-4f0f-a238-e80e09fa3d33": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108684246s Nov 25 03:47:46.834: INFO: Pod "pod-secrets-04353e6c-7cf7-4f0f-a238-e80e09fa3d33": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.165346015s [1mSTEP[0m: Saw pod success Nov 25 03:47:46.834: INFO: Pod "pod-secrets-04353e6c-7cf7-4f0f-a238-e80e09fa3d33" satisfied condition "Succeeded or Failed" Nov 25 03:47:46.890: INFO: Trying to get logs from node capz-wgj520-md-0-spq5f pod pod-secrets-04353e6c-7cf7-4f0f-a238-e80e09fa3d33 container secret-env-test: <nil> [1mSTEP[0m: delete the pod Nov 25 03:47:47.009: INFO: Waiting for pod pod-secrets-04353e6c-7cf7-4f0f-a238-e80e09fa3d33 to disappear Nov 25 03:47:47.064: INFO: Pod pod-secrets-04353e6c-7cf7-4f0f-a238-e80e09fa3d33 no longer exists [AfterEach] [sig-node] Secrets /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:47:47.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "secrets-2048" for this suite. 
[32m•[0m{"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":335,"completed":14,"skipped":187,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-storage] Subpath[0m [90mAtomic writer volumes[0m [1mshould support subpaths with downward pod [Excluded:WindowsDocker] [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-storage] Subpath ... skipping 7 lines ... /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 [1mSTEP[0m: Setting up data [It] should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [1mSTEP[0m: Creating pod pod-subpath-test-downwardapi-phkw [1mSTEP[0m: Creating a pod to test atomic-volume-subpath Nov 25 03:47:47.740: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-phkw" in namespace "subpath-8884" to be "Succeeded or Failed" Nov 25 03:47:47.795: INFO: Pod "pod-subpath-test-downwardapi-phkw": Phase="Pending", Reason="", readiness=false. Elapsed: 54.525771ms Nov 25 03:47:49.851: INFO: Pod "pod-subpath-test-downwardapi-phkw": Phase="Running", Reason="", readiness=true. Elapsed: 2.110246078s Nov 25 03:47:51.907: INFO: Pod "pod-subpath-test-downwardapi-phkw": Phase="Running", Reason="", readiness=true. Elapsed: 4.16681312s Nov 25 03:47:53.963: INFO: Pod "pod-subpath-test-downwardapi-phkw": Phase="Running", Reason="", readiness=true. Elapsed: 6.222403399s Nov 25 03:47:56.019: INFO: Pod "pod-subpath-test-downwardapi-phkw": Phase="Running", Reason="", readiness=true. Elapsed: 8.278859443s Nov 25 03:47:58.075: INFO: Pod "pod-subpath-test-downwardapi-phkw": Phase="Running", Reason="", readiness=true. Elapsed: 10.334708275s ... skipping 2 lines ... Nov 25 03:48:04.244: INFO: Pod "pod-subpath-test-downwardapi-phkw": Phase="Running", Reason="", readiness=true. Elapsed: 16.503536571s Nov 25 03:48:06.300: INFO: Pod "pod-subpath-test-downwardapi-phkw": Phase="Running", Reason="", readiness=true. Elapsed: 18.560098515s Nov 25 03:48:08.356: INFO: Pod "pod-subpath-test-downwardapi-phkw": Phase="Running", Reason="", readiness=true. Elapsed: 20.615996608s Nov 25 03:48:10.412: INFO: Pod "pod-subpath-test-downwardapi-phkw": Phase="Running", Reason="", readiness=false. Elapsed: 22.671539743s Nov 25 03:48:12.467: INFO: Pod "pod-subpath-test-downwardapi-phkw": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.727239981s [1mSTEP[0m: Saw pod success Nov 25 03:48:12.468: INFO: Pod "pod-subpath-test-downwardapi-phkw" satisfied condition "Succeeded or Failed" Nov 25 03:48:12.523: INFO: Trying to get logs from node capz-wgj520-md-0-spq5f pod pod-subpath-test-downwardapi-phkw container test-container-subpath-downwardapi-phkw: <nil> [1mSTEP[0m: delete the pod Nov 25 03:48:12.644: INFO: Waiting for pod pod-subpath-test-downwardapi-phkw to disappear Nov 25 03:48:12.700: INFO: Pod pod-subpath-test-downwardapi-phkw no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-downwardapi-phkw Nov 25 03:48:12.701: INFO: Deleting pod "pod-subpath-test-downwardapi-phkw" in namespace "subpath-8884" [AfterEach] [sig-storage] Subpath /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:48:12.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "subpath-8884" for this suite. [32m•[0m{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","total":335,"completed":15,"skipped":194,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-cli] Kubectl client[0m [90mKubectl server-side dry-run[0m [1mshould check if kubectl can dry-run update Pods [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-cli] Kubectl client ... skipping 20 lines ... Nov 25 03:48:25.267: INFO: stderr: "" Nov 25 03:48:25.267: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:48:25.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "kubectl-7906" for this suite. [32m•[0m{"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":335,"completed":16,"skipped":212,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-cli] Kubectl client[0m [90mUpdate Demo[0m [1mshould create and stop a replication controller [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-cli] Kubectl client ... skipping 61 lines ... Nov 25 03:48:35.567: INFO: stderr: "" Nov 25 03:48:35.567: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:48:35.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "kubectl-8146" for this suite. 
[32m•[0m{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":335,"completed":17,"skipped":266,"failed":0} [90m------------------------------[0m [0m[sig-node] Container Runtime[0m [90mblackbox test[0m [0mwhen running a container with a new image[0m [1mshould be able to pull from private registry with secret [NodeConformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:393[0m [BeforeEach] [sig-node] Container Runtime ... skipping 10 lines ... [1mSTEP[0m: check the container status [1mSTEP[0m: delete the container [AfterEach] [sig-node] Container Runtime /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:48:39.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "container-runtime-7750" for this suite. [32m•[0m{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]","total":335,"completed":18,"skipped":266,"failed":0} [90m------------------------------[0m [0m[sig-node] Container Runtime[0m [90mblackbox test[0m [0mon terminated container[0m [1mshould report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-node] Container Runtime ... skipping 13 lines ... Nov 25 03:48:44.490: INFO: Expected: &{} to match Container's Termination Message: -- [1mSTEP[0m: delete the container [AfterEach] [sig-node] Container Runtime /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:48:44.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "container-runtime-525" for this suite. [32m•[0m{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance]","total":335,"completed":19,"skipped":266,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-storage] Secrets[0m [1mshould be consumable from pods in volume [NodeConformance] [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-storage] Secrets ... skipping 4 lines ... [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [1mSTEP[0m: Creating secret with name secret-test-ce39b765-db0f-4a80-9b92-a719ef3eda83 [1mSTEP[0m: Creating a pod to test consume secrets Nov 25 03:48:45.231: INFO: Waiting up to 5m0s for pod "pod-secrets-a3b8b227-4d5b-4a59-a5ee-f6892ebc9a9d" in namespace "secrets-8700" to be "Succeeded or Failed" Nov 25 03:48:45.285: INFO: Pod "pod-secrets-a3b8b227-4d5b-4a59-a5ee-f6892ebc9a9d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 53.289758ms Nov 25 03:48:47.341: INFO: Pod "pod-secrets-a3b8b227-4d5b-4a59-a5ee-f6892ebc9a9d": Phase="Running", Reason="", readiness=true. Elapsed: 2.109881239s Nov 25 03:48:49.397: INFO: Pod "pod-secrets-a3b8b227-4d5b-4a59-a5ee-f6892ebc9a9d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.165606623s [1mSTEP[0m: Saw pod success Nov 25 03:48:49.397: INFO: Pod "pod-secrets-a3b8b227-4d5b-4a59-a5ee-f6892ebc9a9d" satisfied condition "Succeeded or Failed" Nov 25 03:48:49.452: INFO: Trying to get logs from node capz-wgj520-md-0-spq5f pod pod-secrets-a3b8b227-4d5b-4a59-a5ee-f6892ebc9a9d container secret-volume-test: <nil> [1mSTEP[0m: delete the pod Nov 25 03:48:49.571: INFO: Waiting for pod pod-secrets-a3b8b227-4d5b-4a59-a5ee-f6892ebc9a9d to disappear Nov 25 03:48:49.625: INFO: Pod pod-secrets-a3b8b227-4d5b-4a59-a5ee-f6892ebc9a9d no longer exists [AfterEach] [sig-storage] Secrets /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:48:49.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "secrets-8700" for this suite. [32m•[0m{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":335,"completed":20,"skipped":269,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-node] Security Context[0m [90mWhen creating a pod with readOnlyRootFilesystem[0m [1mshould run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217[0m [BeforeEach] [sig-node] Security Context ... skipping 4 lines ... [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Security Context /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217 Nov 25 03:48:50.199: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-5b653eb7-adde-440f-b99a-9eacbbf61ffb" in namespace "security-context-test-4593" to be "Succeeded or Failed" Nov 25 03:48:50.253: INFO: Pod "busybox-readonly-true-5b653eb7-adde-440f-b99a-9eacbbf61ffb": Phase="Pending", Reason="", readiness=false. Elapsed: 53.857567ms Nov 25 03:48:52.310: INFO: Pod "busybox-readonly-true-5b653eb7-adde-440f-b99a-9eacbbf61ffb": Phase="Running", Reason="", readiness=true. Elapsed: 2.111235728s Nov 25 03:48:54.367: INFO: Pod "busybox-readonly-true-5b653eb7-adde-440f-b99a-9eacbbf61ffb": Phase="Running", Reason="", readiness=false. Elapsed: 4.167985803s Nov 25 03:48:56.425: INFO: Pod "busybox-readonly-true-5b653eb7-adde-440f-b99a-9eacbbf61ffb": Phase="Failed", Reason="", readiness=false. 
Elapsed: 6.225870475s Nov 25 03:48:56.425: INFO: Pod "busybox-readonly-true-5b653eb7-adde-440f-b99a-9eacbbf61ffb" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:48:56.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "security-context-test-4593" for this suite. [32m•[0m{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]","total":335,"completed":21,"skipped":276,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-network] DNS[0m [1mshould provide DNS for ExternalName services [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-network] DNS ... skipping 27 lines ... [1mSTEP[0m: retrieving the pod [1mSTEP[0m: looking for the results for each expected name from probers Nov 25 03:49:25.788: INFO: File wheezy_udp@dns-test-service-3.dns-991.svc.cluster.local from pod dns-991/dns-test-52a8a24a-64c9-4f34-87a4-fbeb338fccba contains 'foo.example.com. ' instead of 'bar.example.com.' Nov 25 03:49:25.843: INFO: File jessie_udp@dns-test-service-3.dns-991.svc.cluster.local from pod dns-991/dns-test-52a8a24a-64c9-4f34-87a4-fbeb338fccba contains 'foo.example.com. ' instead of 'bar.example.com.' Nov 25 03:49:25.843: INFO: Lookups using dns-991/dns-test-52a8a24a-64c9-4f34-87a4-fbeb338fccba failed for: [wheezy_udp@dns-test-service-3.dns-991.svc.cluster.local jessie_udp@dns-test-service-3.dns-991.svc.cluster.local] Nov 25 03:49:30.955: INFO: DNS probes using dns-test-52a8a24a-64c9-4f34-87a4-fbeb338fccba succeeded [1mSTEP[0m: deleting the pod [1mSTEP[0m: changing the service to type=ClusterIP [1mSTEP[0m: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-991.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-991.svc.cluster.local; sleep 1; done ... skipping 9 lines ... [1mSTEP[0m: deleting the pod [1mSTEP[0m: deleting the test externalName service [AfterEach] [sig-network] DNS /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:49:35.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "dns-991" for this suite. [32m•[0m{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":335,"completed":22,"skipped":312,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin][0m [1mpatching/updating a mutating webhook should work [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] ... skipping 24 lines ... 
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:49:40.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "webhook-9053" for this suite. [1mSTEP[0m: Destroying namespace "webhook-9053-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 [32m•[0m{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":335,"completed":23,"skipped":319,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-apps] DisruptionController[0m [1mshould observe PodDisruptionBudget status updated [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-apps] DisruptionController ... skipping 11 lines ... [1mSTEP[0m: Waiting for all pods to be running Nov 25 03:49:42.044: INFO: running pods: 0 < 3 [AfterEach] [sig-apps] DisruptionController /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:49:44.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "disruption-2786" for this suite. [32m•[0m{"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]","total":335,"completed":24,"skipped":328,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin][0m [1mworks for CRD without validation schema [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] ... skipping 24 lines ... Nov 25 03:49:55.392: INFO: stderr: "" Nov 25 03:49:55.392: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-9239-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n <empty>\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:49:59.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "crd-publish-openapi-5522" for this suite. 
[32m•[0m{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":335,"completed":25,"skipped":353,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-apps] StatefulSet[0m [90mBasic StatefulSet functionality [StatefulSetBasic][0m [1mshould list, patch and delete a collection of StatefulSets [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-apps] StatefulSet ... skipping 24 lines ... /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120 Nov 25 03:50:21.500: INFO: Deleting all statefulset in ns statefulset-5633 [AfterEach] [sig-apps] StatefulSet /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:50:21.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "statefulset-5633" for this suite. [32m•[0m{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should list, patch and delete a collection of StatefulSets [Conformance]","total":335,"completed":26,"skipped":410,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-network] Services[0m [1mshould have session affinity work for service with type clusterIP [LinuxOnly] [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-network] Services ... skipping 45 lines ... [AfterEach] [sig-network] Services /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:50:33.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "services-4389" for this suite. [AfterEach] [sig-network] Services /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756 [32m•[0m{"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":335,"completed":27,"skipped":420,"failed":0} [36mS[0m [90m------------------------------[0m [0m[sig-node] InitContainer [NodeConformance][0m [1mshould invoke init containers on a RestartAlways pod [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-node] InitContainer [NodeConformance] ... skipping 10 lines ... 
[1mSTEP[0m: creating the pod Nov 25 03:50:34.443: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [sig-node] InitContainer [NodeConformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:50:37.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "init-container-39" for this suite. [32m•[0m{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":335,"completed":28,"skipped":421,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-auth] ServiceAccounts[0m [1mServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-auth] ServiceAccounts ... skipping 3 lines ... [1mSTEP[0m: Building a namespace api object, basename svcaccounts [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Nov 25 03:50:38.433: INFO: created pod Nov 25 03:50:38.433: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-124" to be "Succeeded or Failed" Nov 25 03:50:38.488: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 54.385867ms Nov 25 03:50:40.545: INFO: Pod "oidc-discovery-validator": Phase="Running", Reason="", readiness=true. Elapsed: 2.111887344s Nov 25 03:50:42.604: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.170339341s [1mSTEP[0m: Saw pod success Nov 25 03:50:42.604: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed" Nov 25 03:51:12.607: INFO: polling logs Nov 25 03:51:12.670: INFO: Pod logs: I1125 03:50:39.197257 1 log.go:195] OK: Got token I1125 03:50:39.197290 1 log.go:195] validating with in-cluster discovery I1125 03:50:39.197810 1 log.go:195] OK: got issuer https://kubernetes.default.svc.cluster.local I1125 03:50:39.197838 1 log.go:195] Full, not-validated claims: ... skipping 6 lines ... Nov 25 03:51:12.670: INFO: completed pod [AfterEach] [sig-auth] ServiceAccounts /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:51:12.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "svcaccounts-124" for this suite. [32m•[0m{"msg":"PASSED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":335,"completed":29,"skipped":438,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-apps] ReplicaSet[0m [1mReplicaset should have a working scale subresource [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-apps] ReplicaSet ... 
skipping 13 lines ... [1mSTEP[0m: verifying the replicaset Spec.Replicas was modified [1mSTEP[0m: Patch a scale subresource [AfterEach] [sig-apps] ReplicaSet /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:51:15.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "replicaset-3691" for this suite. [32m•[0m{"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":335,"completed":30,"skipped":451,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-storage] Projected configMap[0m [1mshould be consumable from pods in volume with mappings [NodeConformance] [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-storage] Projected configMap ... skipping 4 lines ... [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [1mSTEP[0m: Creating configMap with name projected-configmap-test-volume-map-20c55c53-a8ff-4029-b4f5-5ce6ec704b05 [1mSTEP[0m: Creating a pod to test consume configMaps Nov 25 03:51:16.516: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-59899e1d-0d0c-45c1-ac82-20c3189f1d9a" in namespace "projected-5143" to be "Succeeded or Failed" Nov 25 03:51:16.573: INFO: Pod "pod-projected-configmaps-59899e1d-0d0c-45c1-ac82-20c3189f1d9a": Phase="Pending", Reason="", readiness=false. Elapsed: 56.35785ms Nov 25 03:51:18.630: INFO: Pod "pod-projected-configmaps-59899e1d-0d0c-45c1-ac82-20c3189f1d9a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113234435s Nov 25 03:51:20.688: INFO: Pod "pod-projected-configmaps-59899e1d-0d0c-45c1-ac82-20c3189f1d9a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.171004182s [1mSTEP[0m: Saw pod success Nov 25 03:51:20.688: INFO: Pod "pod-projected-configmaps-59899e1d-0d0c-45c1-ac82-20c3189f1d9a" satisfied condition "Succeeded or Failed" Nov 25 03:51:20.744: INFO: Trying to get logs from node capz-wgj520-md-0-qgnj5 pod pod-projected-configmaps-59899e1d-0d0c-45c1-ac82-20c3189f1d9a container agnhost-container: <nil> [1mSTEP[0m: delete the pod Nov 25 03:51:20.867: INFO: Waiting for pod pod-projected-configmaps-59899e1d-0d0c-45c1-ac82-20c3189f1d9a to disappear Nov 25 03:51:20.923: INFO: Pod pod-projected-configmaps-59899e1d-0d0c-45c1-ac82-20c3189f1d9a no longer exists [AfterEach] [sig-storage] Projected configMap /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:51:20.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "projected-5143" for this suite. 
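For reference, the projected-configMap case above amounts to a pod whose projected volume maps a configMap key onto a chosen file name, which the container then reads back. A minimal client-go sketch of such a pod (namespace, configMap name, key, file path and image are illustrative assumptions, not values from this run):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path; the e2e run uses its own cluster kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-cm-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "reader",
				Image:   "busybox:1.36", // illustrative image
				Command: []string{"cat", "/etc/projected/renamed-key"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "cm-projection",
					MountPath: "/etc/projected",
					ReadOnly:  true,
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "cm-projection",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "demo-config"},
								// Map the configMap key "data-1" to the file "renamed-key".
								Items: []corev1.KeyToPath{{Key: "data-1", Path: "renamed-key"}},
							},
						}},
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

The same Items key-to-path mapping also exists on a plain configMap volume source; the projected form is what this particular test targets.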
[32m•[0m{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":335,"completed":31,"skipped":453,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-apps] ReplicaSet[0m [1mshould list and delete a collection of ReplicaSets [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-apps] ReplicaSet ... skipping 14 lines ... [1mSTEP[0m: DeleteCollection of the ReplicaSets [1mSTEP[0m: After DeleteCollection verify that ReplicaSets have been deleted [AfterEach] [sig-apps] ReplicaSet /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:51:23.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "replicaset-5068" for this suite. [32m•[0m{"msg":"PASSED [sig-apps] ReplicaSet should list and delete a collection of ReplicaSets [Conformance]","total":335,"completed":32,"skipped":478,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-node] Sysctls [LinuxOnly] [NodeConformance][0m [1mshould support sysctls [MinimumKubeletVersion:1.21] [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] ... skipping 7 lines ... [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:65 [It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [1mSTEP[0m: Creating a pod with the kernel.shm_rmid_forced sysctl [1mSTEP[0m: Watching for error events or started pod [1mSTEP[0m: Waiting for pod completion [1mSTEP[0m: Checking that the pod succeeded [1mSTEP[0m: Getting logs from the pod [1mSTEP[0m: Checking that the sysctl is actually updated [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:51:28.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "sysctl-7664" for this suite. 
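The sysctl steps above reduce to setting pod-level securityContext.sysctls and then reading the value back inside the container. A sketch of the corresponding object (pod name, image and command are illustrative assumptions):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "sysctl-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				// kernel.shm_rmid_forced is in the kubelet's default "safe" sysctl set,
				// which is why the conformance test can rely on it without node tuning.
				Sysctls: []corev1.Sysctl{{Name: "kernel.shm_rmid_forced", Value: "1"}},
			},
			Containers: []corev1.Container{{
				Name:    "check",
				Image:   "busybox:1.36", // illustrative image
				Command: []string{"cat", "/proc/sys/kernel/shm_rmid_forced"},
			}},
		},
	}
	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}

Unsafe sysctls, by contrast, have to be explicitly permitted through the kubelet's allowed-unsafe-sysctls setting before such a pod would be admitted.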
[32m•[0m{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":335,"completed":33,"skipped":487,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-node] Container Runtime[0m [90mblackbox test[0m [0mwhen running a container with a new image[0m [1mshould not be able to pull image from invalid registry [NodeConformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:377[0m [BeforeEach] [sig-node] Container Runtime ... skipping 9 lines ... [1mSTEP[0m: check the container status [1mSTEP[0m: delete the container [AfterEach] [sig-node] Container Runtime /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:51:31.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "container-runtime-2228" for this suite. [32m•[0m{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]","total":335,"completed":34,"skipped":534,"failed":0} [36mS[0m [90m------------------------------[0m [0m[sig-node] Pods[0m [1mshould support pod readiness gates [NodeConformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:775[0m [BeforeEach] [sig-node] Pods ... skipping 12 lines ... [1mSTEP[0m: patching pod status with condition "k8s.io/test-condition2" to true [1mSTEP[0m: patching pod status with condition "k8s.io/test-condition1" to false [AfterEach] [sig-node] Pods /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:51:46.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "pods-4631" for this suite. [32m•[0m{"msg":"PASSED [sig-node] Pods should support pod readiness gates [NodeConformance]","total":335,"completed":35,"skipped":535,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-apps] ReplicaSet[0m [1mshould serve a basic image on each replica with a public image [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-apps] ReplicaSet ... skipping 12 lines ... 
Nov 25 03:51:49.545: INFO: Trying to dial the pod Nov 25 03:51:54.717: INFO: Controller my-hostname-basic-9cae71b0-4202-46c6-9682-1a348d9a26de: Got expected result from replica 1 [my-hostname-basic-9cae71b0-4202-46c6-9682-1a348d9a26de-54tt9]: "my-hostname-basic-9cae71b0-4202-46c6-9682-1a348d9a26de-54tt9", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:51:54.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "replicaset-8105" for this suite. [32m•[0m{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":335,"completed":36,"skipped":563,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-api-machinery] ResourceQuota[0m [1mshould create a ResourceQuota and capture the life of a configMap. [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-api-machinery] ResourceQuota ... skipping 13 lines ... [1mSTEP[0m: Deleting a ConfigMap [1mSTEP[0m: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:52:23.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "resourcequota-7627" for this suite. [32m•[0m{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":335,"completed":37,"skipped":578,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-storage] HostPath[0m [1mshould give a volume the correct mode [LinuxOnly] [NodeConformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:48[0m [BeforeEach] [sig-storage] HostPath ... skipping 5 lines ... [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:48 [1mSTEP[0m: Creating a pod to test hostPath mode Nov 25 03:52:24.278: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-33" to be "Succeeded or Failed" Nov 25 03:52:24.334: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 56.159861ms Nov 25 03:52:26.392: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113760465s Nov 25 03:52:28.450: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.172143466s [1mSTEP[0m: Saw pod success Nov 25 03:52:28.450: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" Nov 25 03:52:28.507: INFO: Trying to get logs from node capz-wgj520-md-0-spq5f pod pod-host-path-test container test-container-1: <nil> [1mSTEP[0m: delete the pod Nov 25 03:52:28.696: INFO: Waiting for pod pod-host-path-test to disappear Nov 25 03:52:28.749: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:52:28.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "hostpath-33" for this suite. [32m•[0m{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","total":335,"completed":38,"skipped":582,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-apps] CronJob[0m [1mshould schedule multiple jobs concurrently [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-apps] CronJob ... skipping 10 lines ... [1mSTEP[0m: Ensuring at least two running jobs exists by listing jobs explicitly [1mSTEP[0m: Removing cronjob [AfterEach] [sig-apps] CronJob /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:54:01.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "cronjob-9291" for this suite. [32m•[0m{"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","total":335,"completed":39,"skipped":610,"failed":0} [90m------------------------------[0m [0m[sig-storage] Projected combined[0m [1mshould project all components that make up the projection API [Projection][NodeConformance] [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-storage] Projected combined ... skipping 5 lines ... [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [1mSTEP[0m: Creating configMap with name configmap-projected-all-test-volume-db700b5e-6ca6-47c5-850b-f41acb8c46db [1mSTEP[0m: Creating secret with name secret-projected-all-test-volume-5fbea268-e437-4d00-ada6-9ec1bf97e557 [1mSTEP[0m: Creating a pod to test Check all projections for projected volume plugin Nov 25 03:54:02.165: INFO: Waiting up to 5m0s for pod "projected-volume-939bedb1-93ba-48fd-bc60-b8d72da0ec44" in namespace "projected-2329" to be "Succeeded or Failed" Nov 25 03:54:02.219: INFO: Pod "projected-volume-939bedb1-93ba-48fd-bc60-b8d72da0ec44": Phase="Pending", Reason="", readiness=false. Elapsed: 53.436662ms Nov 25 03:54:04.276: INFO: Pod "projected-volume-939bedb1-93ba-48fd-bc60-b8d72da0ec44": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.110525136s Nov 25 03:54:06.332: INFO: Pod "projected-volume-939bedb1-93ba-48fd-bc60-b8d72da0ec44": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.166955832s [1mSTEP[0m: Saw pod success Nov 25 03:54:06.332: INFO: Pod "projected-volume-939bedb1-93ba-48fd-bc60-b8d72da0ec44" satisfied condition "Succeeded or Failed" Nov 25 03:54:06.389: INFO: Trying to get logs from node capz-wgj520-md-0-spq5f pod projected-volume-939bedb1-93ba-48fd-bc60-b8d72da0ec44 container projected-all-volume-test: <nil> [1mSTEP[0m: delete the pod Nov 25 03:54:06.515: INFO: Waiting for pod projected-volume-939bedb1-93ba-48fd-bc60-b8d72da0ec44 to disappear Nov 25 03:54:06.569: INFO: Pod projected-volume-939bedb1-93ba-48fd-bc60-b8d72da0ec44 no longer exists [AfterEach] [sig-storage] Projected combined /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:54:06.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "projected-2329" for this suite. [32m•[0m{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":335,"completed":40,"skipped":610,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-node] Probing container[0m [1mshould be restarted with a /healthz http liveness probe [NodeConformance] [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-node] Probing container ... skipping 14 lines ... Nov 25 03:54:29.939: INFO: Restart count of pod container-probe-5262/liveness-f4ea9129-846b-43c6-b101-7658bfa9b453 is now 1 (20.629338764s elapsed) [1mSTEP[0m: deleting the pod [AfterEach] [sig-node] Probing container /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:54:30.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "container-probe-5262" for this suite. [32m•[0m{"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":335,"completed":41,"skipped":625,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-node] Security Context[0m [1mshould support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-node] Security Context ... skipping 3 lines ... 
[1mSTEP[0m: Building a namespace api object, basename security-context [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [1mSTEP[0m: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Nov 25 03:54:30.571: INFO: Waiting up to 5m0s for pod "security-context-f32ed8f0-dca7-462e-b64d-188943efdc00" in namespace "security-context-6585" to be "Succeeded or Failed" Nov 25 03:54:30.627: INFO: Pod "security-context-f32ed8f0-dca7-462e-b64d-188943efdc00": Phase="Pending", Reason="", readiness=false. Elapsed: 55.574621ms Nov 25 03:54:32.690: INFO: Pod "security-context-f32ed8f0-dca7-462e-b64d-188943efdc00": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11880848s Nov 25 03:54:34.747: INFO: Pod "security-context-f32ed8f0-dca7-462e-b64d-188943efdc00": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.175360149s [1mSTEP[0m: Saw pod success Nov 25 03:54:34.747: INFO: Pod "security-context-f32ed8f0-dca7-462e-b64d-188943efdc00" satisfied condition "Succeeded or Failed" Nov 25 03:54:34.803: INFO: Trying to get logs from node capz-wgj520-md-0-qgnj5 pod security-context-f32ed8f0-dca7-462e-b64d-188943efdc00 container test-container: <nil> [1mSTEP[0m: delete the pod Nov 25 03:54:34.933: INFO: Waiting for pod security-context-f32ed8f0-dca7-462e-b64d-188943efdc00 to disappear Nov 25 03:54:34.987: INFO: Pod security-context-f32ed8f0-dca7-462e-b64d-188943efdc00 no longer exists [AfterEach] [sig-node] Security Context /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:54:34.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "security-context-6585" for this suite. [32m•[0m{"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":335,"completed":42,"skipped":661,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-api-machinery] Garbage collector[0m [1mshould orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-api-machinery] Garbage collector ... skipping 35 lines ... For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:54:36.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "gc-4468" for this suite. 
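The garbage-collector case above hinges on the delete option rather than on the deployment itself: deleting the owner with PropagationPolicy=Orphan removes it but leaves the dependent ReplicaSet behind, which is what the test asserts. A minimal client-go sketch (kubeconfig path, namespace and deployment name are assumptions):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path and object names; purely illustrative.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Orphan propagation deletes the Deployment but keeps its ReplicaSet (and Pods).
	orphan := metav1.DeletePropagationOrphan
	err = cs.AppsV1().Deployments("default").Delete(
		context.TODO(),
		"demo-deployment",
		metav1.DeleteOptions{PropagationPolicy: &orphan},
	)
	if err != nil {
		panic(err)
	}
}

The other policies, Background and Foreground, delete the dependents as well, differing only in whether the owner disappears before or after them.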
[32m•[0m{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":335,"completed":43,"skipped":708,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-cli] Kubectl client[0m [90mProxy server[0m [1mshould support --unix-socket=/path [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-cli] Kubectl client ... skipping 11 lines ... Nov 25 03:54:37.293: INFO: Asynchronously running '/home/prow/go/src/k8s.io/kubernetes/_output/bin/kubectl kubectl --server=https://capz-wgj520-5558bd25.westus2.cloudapp.azure.com:6443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig --namespace=kubectl-5319 proxy --unix-socket=/tmp/kubectl-proxy-unix1489732131/test' [1mSTEP[0m: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:54:37.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "kubectl-5319" for this suite. [32m•[0m{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":335,"completed":44,"skipped":780,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-instrumentation] Events API[0m [1mshould ensure that an event can be fetched, patched, deleted, and listed [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-instrumentation] Events API ... skipping 21 lines ... [1mSTEP[0m: listing events in all namespaces [1mSTEP[0m: listing events in test namespace [AfterEach] [sig-instrumentation] Events API /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:54:38.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "events-5686" for this suite. [32m•[0m{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":335,"completed":45,"skipped":784,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-network] Services[0m [1mshould have session affinity work for NodePort service [LinuxOnly] [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-network] Services ... skipping 51 lines ... 
[AfterEach] [sig-network] Services /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:54:52.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "services-3362" for this suite. [AfterEach] [sig-network] Services /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756 [32m•[0m{"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":335,"completed":46,"skipped":790,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-instrumentation] Events[0m [1mshould ensure that an event can be fetched, patched, deleted, and listed [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-instrumentation] Events ... skipping 12 lines ... [1mSTEP[0m: deleting the test event [1mSTEP[0m: listing all events in all namespaces [AfterEach] [sig-instrumentation] Events /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:54:53.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "events-4678" for this suite. [32m•[0m{"msg":"PASSED [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":335,"completed":47,"skipped":810,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin][0m [1mshould mutate custom resource [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] ... skipping 22 lines ... /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:55:00.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "webhook-1220" for this suite. [1mSTEP[0m: Destroying namespace "webhook-1220-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 [32m•[0m{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":335,"completed":48,"skipped":895,"failed":0} [36mS[0m [90m------------------------------[0m [0m[sig-apps] CronJob[0m [1mshould support CronJob API operations [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-apps] CronJob ... skipping 24 lines ... [1mSTEP[0m: deleting [1mSTEP[0m: deleting a collection [AfterEach] [sig-apps] CronJob /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:55:02.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "cronjob-2546" for this suite. [32m•[0m{"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":335,"completed":49,"skipped":896,"failed":0} [90m------------------------------[0m [0m[sig-storage] Downward API volume[0m [1mshould update labels on modification [NodeConformance] [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-storage] Downward API volume ... skipping 12 lines ... Nov 25 03:55:04.903: INFO: The status of Pod labelsupdate2faeb0f9-260e-4289-8efe-ef945e091bae is Running (Ready = true) Nov 25 03:55:05.652: INFO: Successfully updated pod "labelsupdate2faeb0f9-260e-4289-8efe-ef945e091bae" [AfterEach] [sig-storage] Downward API volume /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:55:07.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "downward-api-2283" for this suite. [32m•[0m{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":335,"completed":50,"skipped":896,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-cli] Kubectl client[0m [90mKubectl replace[0m [1mshould update a single-container pod's image [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-cli] Kubectl client ... skipping 29 lines ... Nov 25 03:55:18.186: INFO: stderr: "" Nov 25 03:55:18.186: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:55:18.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "kubectl-2121" for this suite. 
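The kubectl-replace case above swaps only the container image, which works because spec.containers[*].image is one of the few pod fields that may be mutated in place. The test drives this through kubectl replace on an edited manifest; a rough API-level equivalent in client-go (not what the test actually runs; kubeconfig path and replacement image are assumptions) looks like:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig, namespace and image; pod name taken from the log above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	// Fetch the running pod, change only the image, and submit the full object back.
	pod, err := cs.CoreV1().Pods("default").Get(ctx, "e2e-test-httpd-pod", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	pod.Spec.Containers[0].Image = "busybox:1.36"
	if _, err := cs.CoreV1().Pods("default").Update(ctx, pod, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}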
[32m•[0m{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":335,"completed":51,"skipped":910,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-instrumentation] Events[0m [1mshould delete a collection of events [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-instrumentation] Events ... skipping 15 lines ... [1mSTEP[0m: check that the list of events matches the requested quantity Nov 25 03:55:19.018: INFO: requesting list of events to confirm quantity [AfterEach] [sig-instrumentation] Events /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:55:19.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "events-2377" for this suite. [32m•[0m{"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":335,"completed":52,"skipped":913,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-node] Docker Containers[0m [1mshould be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-node] Docker Containers ... skipping 3 lines ... [1mSTEP[0m: Building a namespace api object, basename containers [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [1mSTEP[0m: Creating a pod to test override arguments Nov 25 03:55:19.629: INFO: Waiting up to 5m0s for pod "client-containers-f3184696-2fce-434c-b110-88fe1f7cd3ab" in namespace "containers-8029" to be "Succeeded or Failed" Nov 25 03:55:19.684: INFO: Pod "client-containers-f3184696-2fce-434c-b110-88fe1f7cd3ab": Phase="Pending", Reason="", readiness=false. Elapsed: 54.444735ms Nov 25 03:55:21.739: INFO: Pod "client-containers-f3184696-2fce-434c-b110-88fe1f7cd3ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109399311s Nov 25 03:55:23.794: INFO: Pod "client-containers-f3184696-2fce-434c-b110-88fe1f7cd3ab": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.164331943s [1mSTEP[0m: Saw pod success Nov 25 03:55:23.794: INFO: Pod "client-containers-f3184696-2fce-434c-b110-88fe1f7cd3ab" satisfied condition "Succeeded or Failed" Nov 25 03:55:23.849: INFO: Trying to get logs from node capz-wgj520-md-0-qgnj5 pod client-containers-f3184696-2fce-434c-b110-88fe1f7cd3ab container agnhost-container: <nil> [1mSTEP[0m: delete the pod Nov 25 03:55:23.968: INFO: Waiting for pod client-containers-f3184696-2fce-434c-b110-88fe1f7cd3ab to disappear Nov 25 03:55:24.022: INFO: Pod client-containers-f3184696-2fce-434c-b110-88fe1f7cd3ab no longer exists [AfterEach] [sig-node] Docker Containers /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:55:24.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "containers-8029" for this suite. [32m•[0m{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":335,"completed":53,"skipped":923,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-apps] ReplicationController[0m [1mshould adopt matching pods on creation [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-apps] ReplicationController ... skipping 13 lines ... [1mSTEP[0m: When a replication controller with a matching selector is created [1mSTEP[0m: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:55:26.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "replication-controller-5914" for this suite. [32m•[0m{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":335,"completed":54,"skipped":964,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-storage] Projected downwardAPI[0m [1mshould provide podname only [NodeConformance] [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-storage] Projected downwardAPI ... skipping 5 lines ... 
[1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide podname only [NodeConformance] [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [1mSTEP[0m: Creating a pod to test downward API volume plugin Nov 25 03:55:27.421: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9ffb6c9c-f891-426d-b86a-1cf004831f71" in namespace "projected-5157" to be "Succeeded or Failed" Nov 25 03:55:27.475: INFO: Pod "downwardapi-volume-9ffb6c9c-f891-426d-b86a-1cf004831f71": Phase="Pending", Reason="", readiness=false. Elapsed: 53.383229ms Nov 25 03:55:29.530: INFO: Pod "downwardapi-volume-9ffb6c9c-f891-426d-b86a-1cf004831f71": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108931572s Nov 25 03:55:31.585: INFO: Pod "downwardapi-volume-9ffb6c9c-f891-426d-b86a-1cf004831f71": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.163832838s [1mSTEP[0m: Saw pod success Nov 25 03:55:31.585: INFO: Pod "downwardapi-volume-9ffb6c9c-f891-426d-b86a-1cf004831f71" satisfied condition "Succeeded or Failed" Nov 25 03:55:31.639: INFO: Trying to get logs from node capz-wgj520-md-0-qgnj5 pod downwardapi-volume-9ffb6c9c-f891-426d-b86a-1cf004831f71 container client-container: <nil> [1mSTEP[0m: delete the pod Nov 25 03:55:31.756: INFO: Waiting for pod downwardapi-volume-9ffb6c9c-f891-426d-b86a-1cf004831f71 to disappear Nov 25 03:55:31.810: INFO: Pod downwardapi-volume-9ffb6c9c-f891-426d-b86a-1cf004831f71 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:55:31.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "projected-5157" for this suite. [32m•[0m{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":335,"completed":55,"skipped":984,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-cli] Kubectl client[0m [90mKubectl describe[0m [1mshould check if kubectl describe prints relevant information for rc and pods [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-cli] Kubectl client ... skipping 21 lines ... Nov 25 03:55:34.399: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Nov 25 03:55:34.399: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://capz-wgj520-5558bd25.westus2.cloudapp.azure.com:6443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig --namespace=kubectl-9770 describe pod agnhost-primary-tccpk' Nov 25 03:55:34.787: INFO: stderr: "" Nov 25 03:55:34.787: INFO: stdout: "Name: agnhost-primary-tccpk\nNamespace: kubectl-9770\nPriority: 0\nNode: capz-wgj520-md-0-qgnj5/10.1.0.4\nStart Time: Fri, 25 Nov 2022 03:55:32 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: cni.projectcalico.org/containerID: 8f5987877385b4c20042df0cbe252c30b6bb5f86afb4c4469288352a36d43f90\n cni.projectcalico.org/podIP: 192.168.160.32/32\n cni.projectcalico.org/podIPs: 192.168.160.32/32\nStatus: Running\nIP: 192.168.160.32\nIPs:\n IP: 192.168.160.32\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://22aaa764096d478582a514bbbc97243378b9f48e6fb6e46a7be40af3e9d38693\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.39\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Fri, 25 Nov 2022 03:55:33 +0000\n Ready: True\n Restart Count: 0\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bk5gz (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-bk5gz:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 2s default-scheduler Successfully assigned kubectl-9770/agnhost-primary-tccpk to capz-wgj520-md-0-qgnj5\n Normal Pulled 1s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.39\" already present on machine\n Normal Created 1s kubelet Created container agnhost-primary\n Normal Started 1s kubelet Started container agnhost-primary\n" Nov 25 03:55:34.787: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://capz-wgj520-5558bd25.westus2.cloudapp.azure.com:6443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig --namespace=kubectl-9770 describe rc agnhost-primary' Nov 25 03:55:35.272: INFO: stderr: "" Nov 25 03:55:35.272: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-9770\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: <none>\nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.39\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 3s replication-controller Created pod: agnhost-primary-tccpk\n" Nov 25 03:55:35.272: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://capz-wgj520-5558bd25.westus2.cloudapp.azure.com:6443 
--kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig --namespace=kubectl-9770 describe service agnhost-primary' Nov 25 03:55:35.747: INFO: stderr: "" Nov 25 03:55:35.747: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-9770\nLabels: app=agnhost\n role=primary\nAnnotations: <none>\nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.110.95.12\nIPs: 10.110.95.12\nPort: <unset> 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 192.168.160.32:6379\nSession Affinity: None\nEvents: <none>\n" Nov 25 03:55:35.809: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://capz-wgj520-5558bd25.westus2.cloudapp.azure.com:6443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig --namespace=kubectl-9770 describe node capz-wgj520-control-plane-k8564' Nov 25 03:55:36.460: INFO: stderr: "" Nov 25 03:55:36.460: INFO: stdout: "Name: capz-wgj520-control-plane-k8564\nRoles: control-plane,master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/instance-type=Standard_D2s_v3\n beta.kubernetes.io/os=linux\n failure-domain.beta.kubernetes.io/region=westus2\n failure-domain.beta.kubernetes.io/zone=westus2-3\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=capz-wgj520-control-plane-k8564\n kubernetes.io/os=linux\n node-role.kubernetes.io/control-plane=\n node-role.kubernetes.io/master=\n node.kubernetes.io/exclude-from-external-load-balancers=\n node.kubernetes.io/instance-type=Standard_D2s_v3\n topology.kubernetes.io/region=westus2\n topology.kubernetes.io/zone=westus2-3\nAnnotations: cluster.x-k8s.io/cluster-name: capz-wgj520\n cluster.x-k8s.io/cluster-namespace: default\n cluster.x-k8s.io/machine: capz-wgj520-control-plane-lxdtc\n cluster.x-k8s.io/owner-kind: KubeadmControlPlane\n cluster.x-k8s.io/owner-name: capz-wgj520-control-plane\n kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n projectcalico.org/IPv4Address: 10.0.0.4/16\n projectcalico.org/IPv4VXLANTunnelAddr: 192.168.228.0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Fri, 25 Nov 2022 03:35:57 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: capz-wgj520-control-plane-k8564\n AcquireTime: <unset>\n RenewTime: Fri, 25 Nov 2022 03:55:29 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Fri, 25 Nov 2022 03:39:50 +0000 Fri, 25 Nov 2022 03:39:50 +0000 RouteCreated RouteController created a route\n MemoryPressure False Fri, 25 Nov 2022 03:54:30 +0000 Fri, 25 Nov 2022 03:35:41 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Fri, 25 Nov 2022 03:54:30 +0000 Fri, 25 Nov 2022 03:35:41 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Fri, 25 Nov 2022 03:54:30 +0000 Fri, 25 Nov 2022 03:35:41 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Fri, 25 Nov 2022 03:54:30 +0000 Fri, 25 Nov 2022 03:36:46 +0000 KubeletReady kubelet is posting ready status. 
AppArmor enabled\nAddresses:\n InternalIP: 10.0.0.4\n Hostname: capz-wgj520-control-plane-k8564\nCapacity:\n cpu: 2\n ephemeral-storage: 129886128Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 8149144Ki\n pods: 110\nAllocatable:\n cpu: 2\n ephemeral-storage: 119703055367\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 8046744Ki\n pods: 110\nSystem Info:\n Machine ID: a6bcaddae3b14715acf6cf069c9f75b2\n System UUID: 187c2889-efc7-b74a-8fa1-df120178740a\n Boot ID: 6cc3cf3e-26f7-491e-9f7a-2504b8ee6592\n Kernel Version: 5.4.0-1091-azure\n OS Image: Ubuntu 18.04.6 LTS\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.6.2\n Kubelet Version: v1.23.15-rc.0.17+e3eac9677a785e\n Kube-Proxy Version: v1.23.15-rc.0.17+e3eac9677a785e\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nProviderID: azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-wgj520/providers/Microsoft.Compute/virtualMachines/capz-wgj520-control-plane-k8564\nNon-terminated Pods: (12 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system calico-kube-controllers-85f479877b-72rwf 0 (0%) 0 (0%) 0 (0%) 0 (0%) 19m\n kube-system calico-node-8dhl4 250m (12%) 0 (0%) 0 (0%) 0 (0%) 19m\n kube-system cloud-controller-manager-56dc498df9-tctpx 100m (5%) 4 (200%) 128Mi (1%) 2Gi (26%) 17m\n kube-system cloud-node-manager-tvq9j 50m (2%) 2 (100%) 50Mi (0%) 512Mi (6%) 17m\n kube-system coredns-bd6b6df9f-9p7df 100m (5%) 0 (0%) 70Mi (0%) 170Mi (2%) 19m\n kube-system coredns-bd6b6df9f-zwcs7 100m (5%) 0 (0%) 70Mi (0%) 170Mi (2%) 19m\n kube-system etcd-capz-wgj520-control-plane-k8564 100m (5%) 0 (0%) 100Mi (1%) 0 (0%) 19m\n kube-system kube-apiserver-capz-wgj520-control-plane-k8564 250m (12%) 0 (0%) 0 (0%) 0 (0%) 19m\n kube-system kube-controller-manager-capz-wgj520-control-plane-k8564 200m (10%) 0 (0%) 0 (0%) 0 (0%) 19m\n kube-system kube-proxy-vfbsg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 19m\n kube-system kube-scheduler-capz-wgj520-control-plane-k8564 100m (5%) 0 (0%) 0 (0%) 0 (0%) 19m\n kube-system metrics-server-7bdcf69694-tbl5f 100m (5%) 0 (0%) 200Mi (2%) 0 (0%) 19m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 1350m (67%) 6 (300%)\n memory 618Mi (7%) 2900Mi (36%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Starting 19m kube-proxy \n Warning InvalidDiskCapacity 19m kubelet invalid capacity 0 on image filesystem\n Normal NodeHasSufficientMemory 19m (x7 over 19m) kubelet Node capz-wgj520-control-plane-k8564 status is now: NodeHasSufficientMemory\n Normal NodeHasNoDiskPressure 19m (x6 over 19m) kubelet Node capz-wgj520-control-plane-k8564 status is now: NodeHasNoDiskPressure\n Normal NodeHasSufficientPID 19m (x6 over 19m) kubelet Node capz-wgj520-control-plane-k8564 status is now: NodeHasSufficientPID\n Normal NodeAllocatableEnforced 19m kubelet Updated Node Allocatable limit across pods\n Normal Starting 19m kubelet Starting kubelet.\n Normal Starting 19m kubelet Starting kubelet.\n Warning InvalidDiskCapacity 19m kubelet invalid capacity 0 on image filesystem\n Normal NodeHasSufficientMemory 19m kubelet Node capz-wgj520-control-plane-k8564 status is now: NodeHasSufficientMemory\n Normal NodeHasNoDiskPressure 19m kubelet Node 
capz-wgj520-control-plane-k8564 status is now: NodeHasNoDiskPressure\n Normal NodeHasSufficientPID 19m kubelet Node capz-wgj520-control-plane-k8564 status is now: NodeHasSufficientPID\n Normal NodeAllocatableEnforced 19m kubelet Updated Node Allocatable limit across pods\n Normal NodeReady 18m kubelet Node capz-wgj520-control-plane-k8564 status is now: NodeReady\n" Nov 25 03:55:36.461: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://capz-wgj520-5558bd25.westus2.cloudapp.azure.com:6443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig --namespace=kubectl-9770 describe namespace kubectl-9770' Nov 25 03:55:36.980: INFO: stderr: "" Nov 25 03:55:36.981: INFO: stdout: "Name: kubectl-9770\nLabels: e2e-framework=kubectl\n e2e-run=60dd02c7-85b4-4488-8e4f-75486479adc9\n kubernetes.io/metadata.name=kubectl-9770\nAnnotations: <none>\nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:55:36.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "kubectl-9770" for this suite. [32m•[0m{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":335,"completed":56,"skipped":1018,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-storage] Secrets[0m [1mshould be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-storage] Secrets ... skipping 4 lines ... [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [1mSTEP[0m: Creating secret with name secret-test-194fc957-d94f-4ad2-9a6c-8d5227ed1776 [1mSTEP[0m: Creating a pod to test consume secrets Nov 25 03:55:37.599: INFO: Waiting up to 5m0s for pod "pod-secrets-3a1a32c7-5ccc-4ebe-ae2a-3ca0b12cbc7d" in namespace "secrets-3891" to be "Succeeded or Failed" Nov 25 03:55:37.654: INFO: Pod "pod-secrets-3a1a32c7-5ccc-4ebe-ae2a-3ca0b12cbc7d": Phase="Pending", Reason="", readiness=false. Elapsed: 54.94216ms Nov 25 03:55:39.709: INFO: Pod "pod-secrets-3a1a32c7-5ccc-4ebe-ae2a-3ca0b12cbc7d": Phase="Running", Reason="", readiness=false. Elapsed: 2.109943636s Nov 25 03:55:41.764: INFO: Pod "pod-secrets-3a1a32c7-5ccc-4ebe-ae2a-3ca0b12cbc7d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.16510679s [1mSTEP[0m: Saw pod success Nov 25 03:55:41.764: INFO: Pod "pod-secrets-3a1a32c7-5ccc-4ebe-ae2a-3ca0b12cbc7d" satisfied condition "Succeeded or Failed" Nov 25 03:55:41.818: INFO: Trying to get logs from node capz-wgj520-md-0-spq5f pod pod-secrets-3a1a32c7-5ccc-4ebe-ae2a-3ca0b12cbc7d container secret-volume-test: <nil> [1mSTEP[0m: delete the pod Nov 25 03:55:41.987: INFO: Waiting for pod pod-secrets-3a1a32c7-5ccc-4ebe-ae2a-3ca0b12cbc7d to disappear Nov 25 03:55:42.042: INFO: Pod pod-secrets-3a1a32c7-5ccc-4ebe-ae2a-3ca0b12cbc7d no longer exists [AfterEach] [sig-storage] Secrets /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:55:42.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "secrets-3891" for this suite. [32m•[0m{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":335,"completed":57,"skipped":1031,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-network] DNS[0m [1mshould provide DNS for pods for Subdomain [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-network] DNS ... skipping 18 lines ... Nov 25 03:55:44.984: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4286.svc.cluster.local from pod dns-4286/dns-test-aa642944-7562-4ce9-8cf8-1b71989d07c6: the server could not find the requested resource (get pods dns-test-aa642944-7562-4ce9-8cf8-1b71989d07c6) Nov 25 03:55:45.038: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4286.svc.cluster.local from pod dns-4286/dns-test-aa642944-7562-4ce9-8cf8-1b71989d07c6: the server could not find the requested resource (get pods dns-test-aa642944-7562-4ce9-8cf8-1b71989d07c6) Nov 25 03:55:45.092: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4286.svc.cluster.local from pod dns-4286/dns-test-aa642944-7562-4ce9-8cf8-1b71989d07c6: the server could not find the requested resource (get pods dns-test-aa642944-7562-4ce9-8cf8-1b71989d07c6) Nov 25 03:55:45.146: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4286.svc.cluster.local from pod dns-4286/dns-test-aa642944-7562-4ce9-8cf8-1b71989d07c6: the server could not find the requested resource (get pods dns-test-aa642944-7562-4ce9-8cf8-1b71989d07c6) Nov 25 03:55:45.200: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4286.svc.cluster.local from pod dns-4286/dns-test-aa642944-7562-4ce9-8cf8-1b71989d07c6: the server could not find the requested resource (get pods dns-test-aa642944-7562-4ce9-8cf8-1b71989d07c6) Nov 25 03:55:45.254: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4286.svc.cluster.local from pod dns-4286/dns-test-aa642944-7562-4ce9-8cf8-1b71989d07c6: the server could not find the requested resource (get pods dns-test-aa642944-7562-4ce9-8cf8-1b71989d07c6) Nov 25 03:55:45.308: INFO: Lookups using dns-4286/dns-test-aa642944-7562-4ce9-8cf8-1b71989d07c6 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4286.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4286.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4286.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4286.svc.cluster.local 
jessie_udp@dns-querier-2.dns-test-service-2.dns-4286.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4286.svc.cluster.local jessie_udp@dns-test-service-2.dns-4286.svc.cluster.local] Nov 25 03:55:50.362: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4286.svc.cluster.local from pod dns-4286/dns-test-aa642944-7562-4ce9-8cf8-1b71989d07c6: the server could not find the requested resource (get pods dns-test-aa642944-7562-4ce9-8cf8-1b71989d07c6) Nov 25 03:55:50.416: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4286.svc.cluster.local from pod dns-4286/dns-test-aa642944-7562-4ce9-8cf8-1b71989d07c6: the server could not find the requested resource (get pods dns-test-aa642944-7562-4ce9-8cf8-1b71989d07c6) Nov 25 03:55:50.579: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4286.svc.cluster.local from pod dns-4286/dns-test-aa642944-7562-4ce9-8cf8-1b71989d07c6: the server could not find the requested resource (get pods dns-test-aa642944-7562-4ce9-8cf8-1b71989d07c6) Nov 25 03:55:50.633: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4286.svc.cluster.local from pod dns-4286/dns-test-aa642944-7562-4ce9-8cf8-1b71989d07c6: the server could not find the requested resource (get pods dns-test-aa642944-7562-4ce9-8cf8-1b71989d07c6) Nov 25 03:55:50.740: INFO: Lookups using dns-4286/dns-test-aa642944-7562-4ce9-8cf8-1b71989d07c6 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4286.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4286.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4286.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4286.svc.cluster.local] Nov 25 03:55:55.363: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4286.svc.cluster.local from pod dns-4286/dns-test-aa642944-7562-4ce9-8cf8-1b71989d07c6: the server could not find the requested resource (get pods dns-test-aa642944-7562-4ce9-8cf8-1b71989d07c6) Nov 25 03:55:55.417: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4286.svc.cluster.local from pod dns-4286/dns-test-aa642944-7562-4ce9-8cf8-1b71989d07c6: the server could not find the requested resource (get pods dns-test-aa642944-7562-4ce9-8cf8-1b71989d07c6) Nov 25 03:55:55.587: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4286.svc.cluster.local from pod dns-4286/dns-test-aa642944-7562-4ce9-8cf8-1b71989d07c6: the server could not find the requested resource (get pods dns-test-aa642944-7562-4ce9-8cf8-1b71989d07c6) Nov 25 03:55:55.642: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4286.svc.cluster.local from pod dns-4286/dns-test-aa642944-7562-4ce9-8cf8-1b71989d07c6: the server could not find the requested resource (get pods dns-test-aa642944-7562-4ce9-8cf8-1b71989d07c6) Nov 25 03:55:55.751: INFO: Lookups using dns-4286/dns-test-aa642944-7562-4ce9-8cf8-1b71989d07c6 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4286.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4286.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4286.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4286.svc.cluster.local] Nov 25 03:56:00.363: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4286.svc.cluster.local from pod dns-4286/dns-test-aa642944-7562-4ce9-8cf8-1b71989d07c6: the server could not find the requested resource (get pods dns-test-aa642944-7562-4ce9-8cf8-1b71989d07c6) Nov 25 
03:56:00.417: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4286.svc.cluster.local from pod dns-4286/dns-test-aa642944-7562-4ce9-8cf8-1b71989d07c6: the server could not find the requested resource (get pods dns-test-aa642944-7562-4ce9-8cf8-1b71989d07c6) Nov 25 03:56:00.578: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4286.svc.cluster.local from pod dns-4286/dns-test-aa642944-7562-4ce9-8cf8-1b71989d07c6: the server could not find the requested resource (get pods dns-test-aa642944-7562-4ce9-8cf8-1b71989d07c6) Nov 25 03:56:00.633: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4286.svc.cluster.local from pod dns-4286/dns-test-aa642944-7562-4ce9-8cf8-1b71989d07c6: the server could not find the requested resource (get pods dns-test-aa642944-7562-4ce9-8cf8-1b71989d07c6) Nov 25 03:56:00.741: INFO: Lookups using dns-4286/dns-test-aa642944-7562-4ce9-8cf8-1b71989d07c6 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4286.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4286.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4286.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4286.svc.cluster.local] Nov 25 03:56:05.363: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4286.svc.cluster.local from pod dns-4286/dns-test-aa642944-7562-4ce9-8cf8-1b71989d07c6: the server could not find the requested resource (get pods dns-test-aa642944-7562-4ce9-8cf8-1b71989d07c6) Nov 25 03:56:05.417: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4286.svc.cluster.local from pod dns-4286/dns-test-aa642944-7562-4ce9-8cf8-1b71989d07c6: the server could not find the requested resource (get pods dns-test-aa642944-7562-4ce9-8cf8-1b71989d07c6) Nov 25 03:56:05.585: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4286.svc.cluster.local from pod dns-4286/dns-test-aa642944-7562-4ce9-8cf8-1b71989d07c6: the server could not find the requested resource (get pods dns-test-aa642944-7562-4ce9-8cf8-1b71989d07c6) Nov 25 03:56:05.639: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4286.svc.cluster.local from pod dns-4286/dns-test-aa642944-7562-4ce9-8cf8-1b71989d07c6: the server could not find the requested resource (get pods dns-test-aa642944-7562-4ce9-8cf8-1b71989d07c6) Nov 25 03:56:05.748: INFO: Lookups using dns-4286/dns-test-aa642944-7562-4ce9-8cf8-1b71989d07c6 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4286.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4286.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4286.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4286.svc.cluster.local] Nov 25 03:56:10.363: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4286.svc.cluster.local from pod dns-4286/dns-test-aa642944-7562-4ce9-8cf8-1b71989d07c6: the server could not find the requested resource (get pods dns-test-aa642944-7562-4ce9-8cf8-1b71989d07c6) Nov 25 03:56:10.419: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4286.svc.cluster.local from pod dns-4286/dns-test-aa642944-7562-4ce9-8cf8-1b71989d07c6: the server could not find the requested resource (get pods dns-test-aa642944-7562-4ce9-8cf8-1b71989d07c6) Nov 25 03:56:10.581: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4286.svc.cluster.local from pod dns-4286/dns-test-aa642944-7562-4ce9-8cf8-1b71989d07c6: the server could not find the requested 
resource (get pods dns-test-aa642944-7562-4ce9-8cf8-1b71989d07c6)
Nov 25 03:56:10.636: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4286.svc.cluster.local from pod dns-4286/dns-test-aa642944-7562-4ce9-8cf8-1b71989d07c6: the server could not find the requested resource (get pods dns-test-aa642944-7562-4ce9-8cf8-1b71989d07c6)
Nov 25 03:56:10.745: INFO: Lookups using dns-4286/dns-test-aa642944-7562-4ce9-8cf8-1b71989d07c6 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4286.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4286.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4286.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4286.svc.cluster.local]
Nov 25 03:56:15.745: INFO: DNS probes using dns-4286/dns-test-aa642944-7562-4ce9-8cf8-1b71989d07c6 succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 25 03:56:15.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4286" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":335,"completed":58,"skipped":1040,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-apps] Deployment
  deployment should support proportional scaling [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-apps] Deployment
... skipping 39 lines ...
Nov 25 03:56:26.040: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-5d9fdcc779 deployment-2605 165620db-39cf-43d3-a780-03998c780ada 6375 3 2022-11-25 03:56:16 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 0fbe8471-6c3b-40e1-a117-ef2839483c34 0xc005b968e7 0xc005b968e8}] [] [{kube-controller-manager Update apps/v1 2022-11-25 03:56:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0fbe8471-6c3b-40e1-a117-ef2839483c34\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-11-25 03:56:18 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 5d9fdcc779,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> 
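The ReplicaSet dump that begins above records deployment.kubernetes.io/desired-replicas:30, deployment.kubernetes.io/max-replicas:33 (scale target plus surge) and Spec.Replicas:*20 for the old ReplicaSet, while the new ReplicaSet's pods are blocked on the nonexistent image webserver:404 (ErrImagePull / ImagePullBackOff in the pod statuses below). That 20-of-33 split is what proportional scaling is expected to produce: when a Deployment is resized mid-rollout, the controller divides the allowed total across the existing ReplicaSets in proportion to their current sizes instead of assigning it all to one of them. The Go snippet below is only a back-of-the-envelope sketch of that arithmetic, assuming the stalled rollout held 8 old and 5 new replicas before the resize; those two starting counts are not visible in this excerpt.

package main

import "fmt"

func main() {
	// Visible in the ReplicaSet dump above: desired-replicas:30,
	// max-replicas:33 and Replicas:*20 on the old ReplicaSet.
	//
	// Assumed, not shown in this excerpt: before the scale-up the stalled
	// rollout sat at 8 old + 5 new replicas.
	allowed, oldRS, newRS := 33.0, 8.0, 5.0
	total := oldRS + newRS

	// Proportional scaling keeps each ReplicaSet's share of the allowed total
	// roughly equal to its share before the resize; the controller's real
	// rounding rules are more involved than this nearest-integer rounding.
	fmt.Printf("old RS ~ %.0f, new RS ~ %.0f\n",
		allowed*oldRS/total, // 33*8/13 = 20.3 -> 20
		allowed*newRS/total) // 33*5/13 = 12.7 -> 13
}

Run as written this prints "old RS ~ 20, new RS ~ 13", which matches the Replicas:*20 recorded above; the remaining 13 of the 33 allowed replicas would belong to the image-pull-blocked ReplicaSet.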
map[name:httpd pod-template-hash:5d9fdcc779] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005b96988 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Nov 25 03:56:26.118: INFO: Pod "webserver-deployment-566f96c878-2lf9g" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-2lf9g webserver-deployment-566f96c878- deployment-2605 76ac687a-f920-456c-bc4c-68a481656369 6371 0 2022-11-25 03:56:25 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 23d36e1e-374e-4a89-8234-f459eadc1456 0xc005b96e50 0xc005b96e51}] [] [{kube-controller-manager Update v1 2022-11-25 03:56:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"23d36e1e-374e-4a89-8234-f459eadc1456\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zpmnr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zpmnr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-wgj520-md-0-qgnj5,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastP
robeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-11-25 03:56:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 25 03:56:26.118: INFO: Pod "webserver-deployment-566f96c878-7gjz2" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-7gjz2 webserver-deployment-566f96c878- deployment-2605 fac2fff4-8f33-45d6-82e3-d39cf758bc84 6370 0 2022-11-25 03:56:25 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 23d36e1e-374e-4a89-8234-f459eadc1456 0xc005b96fc0 0xc005b96fc1}] [] [{kube-controller-manager Update v1 2022-11-25 03:56:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"23d36e1e-374e-4a89-8234-f459eadc1456\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-11-25 03:56:25 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-h6k4g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h6k4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-wgj520-md-0-spq5f,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,
LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-11-25 03:56:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-11-25 03:56:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-11-25 03:56:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-11-25 03:56:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.1.0.5,PodIP:,StartTime:2022-11-25 03:56:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 25 03:56:26.118: INFO: Pod "webserver-deployment-566f96c878-8gptb" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-8gptb webserver-deployment-566f96c878- deployment-2605 17b8345f-6919-45dc-9ef1-bb2fc63db908 6306 0 2022-11-25 03:56:23 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[cni.projectcalico.org/containerID:32a7ae4363fca8bf6fa1ad637413aaf1506df095ed67410658d1838c4a933348 cni.projectcalico.org/podIP:192.168.160.39/32 cni.projectcalico.org/podIPs:192.168.160.39/32] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 23d36e1e-374e-4a89-8234-f459eadc1456 0xc005b97190 0xc005b97191}] [] [{Go-http-client Update v1 2022-11-25 03:56:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kube-controller-manager Update v1 2022-11-25 03:56:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"23d36e1e-374e-4a89-8234-f459eadc1456\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-11-25 03:56:25 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.160.39\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-8fvhs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8fvhs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-wgj520-md-0-qgnj5,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,
LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-11-25 03:56:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-11-25 03:56:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-11-25 03:56:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-11-25 03:56:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.1.0.4,PodIP:192.168.160.39,StartTime:2022-11-25 03:56:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.160.39,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 25 03:56:26.119: INFO: Pod "webserver-deployment-566f96c878-8n2k2" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-8n2k2 webserver-deployment-566f96c878- deployment-2605 1139018b-509b-45b5-9b09-f8aade590f95 6393 0 2022-11-25 03:56:22 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[cni.projectcalico.org/containerID:666413f064f3f217a04dae9194877272b2ac89cafb45550023c3c279b72eb9c2 cni.projectcalico.org/podIP:192.168.13.234/32 cni.projectcalico.org/podIPs:192.168.13.234/32] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 23d36e1e-374e-4a89-8234-f459eadc1456 0xc005b973b0 0xc005b973b1}] [] [{kube-controller-manager Update v1 2022-11-25 03:56:22 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"23d36e1e-374e-4a89-8234-f459eadc1456\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {Go-http-client Update v1 2022-11-25 03:56:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2022-11-25 03:56:24 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.13.234\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-4fwc8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4fwc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-wgj520-md-0-spq5f,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:n
ode.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-11-25 03:56:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-11-25 03:56:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-11-25 03:56:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-11-25 03:56:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.1.0.5,PodIP:192.168.13.234,StartTime:2022-11-25 03:56:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ImagePullBackOff,Message:Back-off pulling image "webserver:404",},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.13.234,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 25 03:56:26.119: INFO: Pod "webserver-deployment-566f96c878-c7zhc" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-c7zhc webserver-deployment-566f96c878- deployment-2605 2536ddb8-9828-4744-bb52-7fa1c77ab8c2 6364 0 2022-11-25 03:56:25 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 23d36e1e-374e-4a89-8234-f459eadc1456 0xc005b975e0 0xc005b975e1}] [] [{kube-controller-manager Update v1 2022-11-25 03:56:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"23d36e1e-374e-4a89-8234-f459eadc1456\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-gqf9q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gqf9q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-wgj520-md-0-spq5f,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastP
robeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-11-25 03:56:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 25 03:56:26.119: INFO: Pod "webserver-deployment-566f96c878-g5dnk" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-g5dnk webserver-deployment-566f96c878- deployment-2605 2d5089c7-d934-4344-84b3-4015bf6deaf5 6383 0 2022-11-25 03:56:22 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[cni.projectcalico.org/containerID:edb3a5e40c2582f9dd3649dc76483ffcbe148da24ad15ac8556be797f2136dfc cni.projectcalico.org/podIP:192.168.13.235/32 cni.projectcalico.org/podIPs:192.168.13.235/32] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 23d36e1e-374e-4a89-8234-f459eadc1456 0xc005b97740 0xc005b97741}] [] [{kube-controller-manager Update v1 2022-11-25 03:56:22 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"23d36e1e-374e-4a89-8234-f459eadc1456\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {Go-http-client Update v1 2022-11-25 03:56:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2022-11-25 03:56:25 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.13.235\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-g24vd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g24vd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-wgj520-md-0-spq5f,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,
LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-11-25 03:56:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-11-25 03:56:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-11-25 03:56:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-11-25 03:56:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.1.0.5,PodIP:192.168.13.235,StartTime:2022-11-25 03:56:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.13.235,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 25 03:56:26.119: INFO: Pod "webserver-deployment-566f96c878-l55k6" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-l55k6 webserver-deployment-566f96c878- deployment-2605 1822daf0-6d21-4552-9e45-50481a9ef32b 6344 0 2022-11-25 03:56:25 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 23d36e1e-374e-4a89-8234-f459eadc1456 0xc005b97970 0xc005b97971}] [] [{kube-controller-manager Update v1 2022-11-25 03:56:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"23d36e1e-374e-4a89-8234-f459eadc1456\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-8jl6h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8jl6h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-wgj520-md-0-qgnj5,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastP
robeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-11-25 03:56:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 25 03:56:26.120: INFO: Pod "webserver-deployment-566f96c878-nqh44" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-nqh44 webserver-deployment-566f96c878- deployment-2605 14148bc3-0925-48f6-99a0-d00f6ed7dd8b 6343 0 2022-11-25 03:56:25 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 23d36e1e-374e-4a89-8234-f459eadc1456 0xc005b97ad0 0xc005b97ad1}] [] [{kube-controller-manager Update v1 2022-11-25 03:56:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"23d36e1e-374e-4a89-8234-f459eadc1456\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-96hxs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-96hxs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{
},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-wgj520-md-0-spq5f,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-11-25 03:56:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 25 03:56:26.120: INFO: Pod "webserver-deployment-566f96c878-nsz7l" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-nsz7l webserver-deployment-566f96c878- deployment-2605 66a5c685-1c95-4332-a46c-e17612d8dffb 6303 0 2022-11-25 03:56:22 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[cni.projectcalico.org/containerID:f70c64d8878bfbaa07c2158444a080f857f54618eb06bd3d48235660e7f852bf cni.projectcalico.org/podIP:192.168.160.38/32 cni.projectcalico.org/podIPs:192.168.160.38/32] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 23d36e1e-374e-4a89-8234-f459eadc1456 0xc005b97c30 0xc005b97c31}] [] [{kube-controller-manager Update v1 2022-11-25 03:56:22 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"23d36e1e-374e-4a89-8234-f459eadc1456\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {Go-http-client Update v1 2022-11-25 03:56:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2022-11-25 03:56:25 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.160.38\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-dr4fd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dr4fd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-wgj520-md-0-qgnj5,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:n
ode.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-11-25 03:56:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-11-25 03:56:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-11-25 03:56:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-11-25 03:56:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.1.0.4,PodIP:192.168.160.38,StartTime:2022-11-25 03:56:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.160.38,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 25 03:56:26.120: INFO: Pod "webserver-deployment-566f96c878-qrrdm" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-qrrdm webserver-deployment-566f96c878- deployment-2605 18dd18d3-f132-4aaf-97f1-0b928c14f236 6357 0 2022-11-25 03:56:25 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 23d36e1e-374e-4a89-8234-f459eadc1456 0xc005b97e60 0xc005b97e61}] [] [{kube-controller-manager Update v1 2022-11-25 03:56:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"23d36e1e-374e-4a89-8234-f459eadc1456\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-85r6v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-85r6v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-wgj520-md-0-qgnj5,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastP
robeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-11-25 03:56:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 25 03:56:26.121: INFO: Pod "webserver-deployment-566f96c878-r8tjq" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-r8tjq webserver-deployment-566f96c878- deployment-2605 62d871e1-e7d1-45d9-9044-a20405a18070 6374 0 2022-11-25 03:56:25 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 23d36e1e-374e-4a89-8234-f459eadc1456 0xc005b97fc0 0xc005b97fc1}] [] [{kube-controller-manager Update v1 2022-11-25 03:56:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"23d36e1e-374e-4a89-8234-f459eadc1456\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-mxtzb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mxtzb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{
},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-wgj520-md-0-spq5f,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-11-25 03:56:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 25 03:56:26.121: INFO: Pod "webserver-deployment-566f96c878-txrnp" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-txrnp webserver-deployment-566f96c878- deployment-2605 1b677998-b65b-4987-9996-e18b94ea56c2 6373 0 2022-11-25 03:56:25 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 23d36e1e-374e-4a89-8234-f459eadc1456 0xc005b42120 0xc005b42121}] [] [{kube-controller-manager Update v1 2022-11-25 03:56:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"23d36e1e-374e-4a89-8234-f459eadc1456\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-txb6b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-txb6b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-wgj520-md-0-qgnj5,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastP
robeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-11-25 03:56:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} ... skipping 40 lines ... Nov 25 03:56:26.126: INFO: Pod "webserver-deployment-5d9fdcc779-vzlfp" is available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-vzlfp webserver-deployment-5d9fdcc779- deployment-2605 ffd8f778-40fc-4773-aac9-2cd7a5b79dd2 6180 0 2022-11-25 03:56:16 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[cni.projectcalico.org/containerID:31a4cdd0ebe3a1657ca04b455eabd8b2085060353425bf4f425c150ae674cb21 cni.projectcalico.org/podIP:192.168.13.232/32 cni.projectcalico.org/podIPs:192.168.13.232/32] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 165620db-39cf-43d3-a780-03998c780ada 0xc005af8430 0xc005af8431}] [] [{kube-controller-manager Update v1 2022-11-25 03:56:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"165620db-39cf-43d3-a780-03998c780ada\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {Go-http-client Update v1 2022-11-25 03:56:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2022-11-25 03:56:20 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.13.232\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zmw6z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zmw6z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-wgj520-md-0-spq5f,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{T
ype:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-11-25 03:56:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-11-25 03:56:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-11-25 03:56:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-11-25 03:56:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.1.0.5,PodIP:192.168.13.232,StartTime:2022-11-25 03:56:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-11-25 03:56:19 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://8dd34ebfb781af0782d18e4c3f2e7c6b517e112473a8ceedba2127fe1d75896e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.13.232,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:56:26.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "deployment-2605" for this suite. [32m•[0m{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":335,"completed":59,"skipped":1051,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-node] Container Lifecycle Hook[0m [90mwhen create a pod with lifecycle hook[0m [1mshould execute prestop http hook properly [NodeConformance] [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-node] Container Lifecycle Hook ... skipping 25 lines ... Nov 25 03:56:43.305: INFO: Pod pod-with-prestop-http-hook no longer exists [1mSTEP[0m: check prestop hook [AfterEach] [sig-node] Container Lifecycle Hook /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:56:43.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "container-lifecycle-hook-4671" for this suite. [32m•[0m{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":335,"completed":60,"skipped":1081,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-api-machinery] ResourceQuota[0m [1mshould create a ResourceQuota and capture the life of a secret. 
[Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-api-machinery] ResourceQuota ... skipping 14 lines ... [1mSTEP[0m: Deleting a secret [1mSTEP[0m: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:57:01.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "resourcequota-485" for this suite. [32m•[0m{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":335,"completed":61,"skipped":1094,"failed":0} [36mS[0m [90m------------------------------[0m [0m[sig-node] InitContainer [NodeConformance][0m [1mshould not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-node] InitContainer [NodeConformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Nov 25 03:57:01.423: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig [1mSTEP[0m: Building a namespace api object, basename init-container [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [1mSTEP[0m: creating the pod Nov 25 03:57:01.799: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [sig-node] InitContainer [NodeConformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:57:07.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "init-container-6908" for this suite. [32m•[0m{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":335,"completed":62,"skipped":1095,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-storage] Projected downwardAPI[0m [1mshould update annotations on modification [NodeConformance] [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-storage] Projected downwardAPI ... skipping 12 lines ... 
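The ResourceQuota spec recorded above creates a quota, then a Secret, and checks that the quota's used counters rise and fall with the Secret's lifecycle. A minimal client-go sketch of that flow, with a hypothetical kubeconfig path, namespace, and object names (this is not the e2e framework's own code), could look like:

// Illustrative sketch only: count a Secret against a ResourceQuota, as the
// "capture the life of a secret" spec does. All names are hypothetical.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ns, ctx := "resourcequota-demo", context.Background() // hypothetical namespace

	// Quota that allows at most one Secret in the namespace.
	quota := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "test-quota"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{corev1.ResourceSecrets: resource.MustParse("1")},
		},
	}
	if _, err := cs.CoreV1().ResourceQuotas(ns).Create(ctx, quota, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Creating a Secret should bump the quota's "used" count; deleting it releases the usage.
	sec := &corev1.Secret{ObjectMeta: metav1.ObjectMeta{Name: "quota-secret"}, StringData: map[string]string{"k": "v"}}
	if _, err := cs.CoreV1().Secrets(ns).Create(ctx, sec, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// The e2e spec polls until the quota controller recalculates Status;
	// a single Get here may race and still show an empty count.
	q, err := cs.CoreV1().ResourceQuotas(ns).Get(ctx, "test-quota", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	used := q.Status.Used[corev1.ResourceSecrets]
	fmt.Println("used secrets:", used.String())
}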
Nov 25 03:57:09.728: INFO: The status of Pod annotationupdate8d11cb08-0098-478d-b01c-742bbca99e51 is Running (Ready = true) Nov 25 03:57:10.456: INFO: Successfully updated pod "annotationupdate8d11cb08-0098-478d-b01c-742bbca99e51" [AfterEach] [sig-storage] Projected downwardAPI /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:57:12.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "projected-9235" for this suite. [32m•[0m{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":335,"completed":63,"skipped":1105,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-network] DNS[0m [1mshould provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-network] DNS ... skipping 17 lines ... [1mSTEP[0m: deleting the pod [AfterEach] [sig-network] DNS /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:57:29.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "dns-3748" for this suite. [32m•[0m{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":335,"completed":64,"skipped":1119,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-api-machinery] Watchers[0m [1mshould observe an object deletion if it stops meeting the requirements of the selector [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-api-machinery] Watchers ... skipping 23 lines ... Nov 25 03:57:40.770: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9342 779a4ac6-08b2-4943-b266-4f850d1748a3 7031 0 2022-11-25 03:57:30 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-11-25 03:57:30 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Nov 25 03:57:40.771: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9342 779a4ac6-08b2-4943-b266-4f850d1748a3 7032 0 2022-11-25 03:57:30 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-11-25 03:57:30 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:57:40.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "watch-9342" for this suite. 
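The Watchers spec above registers a watch filtered by the watch-this-configmap label and then logs the MODIFIED and DELETED events shown. A minimal client-go sketch of such a label-selected watch (hypothetical kubeconfig path and namespace; not the framework's own helper):

// Illustrative sketch: watch ConfigMaps by label and print event types,
// mirroring the "Got : MODIFIED/DELETED" lines above. Names are hypothetical.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Watch only ConfigMaps carrying the label the spec toggles on and off.
	w, err := cs.CoreV1().ConfigMaps("watch-demo").Watch(context.Background(), metav1.ListOptions{
		LabelSelector: "watch-this-configmap=label-changed-and-restored",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	for ev := range w.ResultChan() {
		cm, ok := ev.Object.(*corev1.ConfigMap)
		if !ok {
			continue
		}
		// Event types arrive as ADDED, MODIFIED, DELETED, matching the log above.
		fmt.Printf("Got : %s %s (mutation=%s)\n", ev.Type, cm.Name, cm.Data["mutation"])
	}
}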
[32m•[0m{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":335,"completed":65,"skipped":1148,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-node] Kubelet[0m [90mwhen scheduling a busybox command in a pod[0m [1mshould print the output to logs [NodeConformance] [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-node] Kubelet ... skipping 10 lines ... Nov 25 03:57:41.376: INFO: The status of Pod busybox-scheduling-9f3f6f67-17ed-4d49-9f7f-39d34397ecc9 is Pending, waiting for it to be Running (with Ready = true) Nov 25 03:57:43.432: INFO: The status of Pod busybox-scheduling-9f3f6f67-17ed-4d49-9f7f-39d34397ecc9 is Running (Ready = true) [AfterEach] [sig-node] Kubelet /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:57:43.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "kubelet-test-6896" for this suite. [32m•[0m{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":335,"completed":66,"skipped":1187,"failed":0} [36mS[0m [90m------------------------------[0m [0m[sig-storage] Secrets[0m [1mshould be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-storage] Secrets ... skipping 4 lines ... [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [1mSTEP[0m: Creating secret with name secret-test-41ed55ec-584a-455d-b968-8725bd5c45db [1mSTEP[0m: Creating a pod to test consume secrets Nov 25 03:57:44.375: INFO: Waiting up to 5m0s for pod "pod-secrets-cb52ff7d-109a-47ba-89f8-6297c78b7cf9" in namespace "secrets-5371" to be "Succeeded or Failed" Nov 25 03:57:44.429: INFO: Pod "pod-secrets-cb52ff7d-109a-47ba-89f8-6297c78b7cf9": Phase="Pending", Reason="", readiness=false. Elapsed: 53.25167ms Nov 25 03:57:46.484: INFO: Pod "pod-secrets-cb52ff7d-109a-47ba-89f8-6297c78b7cf9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108278822s Nov 25 03:57:48.540: INFO: Pod "pod-secrets-cb52ff7d-109a-47ba-89f8-6297c78b7cf9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.164139136s [1mSTEP[0m: Saw pod success Nov 25 03:57:48.540: INFO: Pod "pod-secrets-cb52ff7d-109a-47ba-89f8-6297c78b7cf9" satisfied condition "Succeeded or Failed" Nov 25 03:57:48.594: INFO: Trying to get logs from node capz-wgj520-md-0-spq5f pod pod-secrets-cb52ff7d-109a-47ba-89f8-6297c78b7cf9 container secret-volume-test: <nil> [1mSTEP[0m: delete the pod Nov 25 03:57:48.714: INFO: Waiting for pod pod-secrets-cb52ff7d-109a-47ba-89f8-6297c78b7cf9 to disappear Nov 25 03:57:48.768: INFO: Pod pod-secrets-cb52ff7d-109a-47ba-89f8-6297c78b7cf9 no longer exists [AfterEach] [sig-storage] Secrets /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:57:48.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "secrets-5371" for this suite. [1mSTEP[0m: Destroying namespace "secret-namespace-6447" for this suite. [32m•[0m{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":335,"completed":67,"skipped":1188,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-node] Security Context[0m [1mshould support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-node] Security Context ... skipping 3 lines ... [1mSTEP[0m: Building a namespace api object, basename security-context [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [1mSTEP[0m: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Nov 25 03:57:49.376: INFO: Waiting up to 5m0s for pod "security-context-9f0bc4ec-b85d-4c67-94ef-c5da19f2323b" in namespace "security-context-3421" to be "Succeeded or Failed" Nov 25 03:57:49.429: INFO: Pod "security-context-9f0bc4ec-b85d-4c67-94ef-c5da19f2323b": Phase="Pending", Reason="", readiness=false. Elapsed: 53.218446ms Nov 25 03:57:51.485: INFO: Pod "security-context-9f0bc4ec-b85d-4c67-94ef-c5da19f2323b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108811704s Nov 25 03:57:53.540: INFO: Pod "security-context-9f0bc4ec-b85d-4c67-94ef-c5da19f2323b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.163877328s [1mSTEP[0m: Saw pod success Nov 25 03:57:53.540: INFO: Pod "security-context-9f0bc4ec-b85d-4c67-94ef-c5da19f2323b" satisfied condition "Succeeded or Failed" Nov 25 03:57:53.594: INFO: Trying to get logs from node capz-wgj520-md-0-spq5f pod security-context-9f0bc4ec-b85d-4c67-94ef-c5da19f2323b container test-container: <nil> [1mSTEP[0m: delete the pod Nov 25 03:57:53.712: INFO: Waiting for pod security-context-9f0bc4ec-b85d-4c67-94ef-c5da19f2323b to disappear Nov 25 03:57:53.766: INFO: Pod security-context-9f0bc4ec-b85d-4c67-94ef-c5da19f2323b no longer exists [AfterEach] [sig-node] Security Context /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:57:53.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "security-context-3421" for this suite. [32m•[0m{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":335,"completed":68,"skipped":1231,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-node] Container Runtime[0m [90mblackbox test[0m [0mwhen running a container with a new image[0m [1mshould be able to pull image [NodeConformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:382[0m [BeforeEach] [sig-node] Container Runtime ... skipping 9 lines ... [1mSTEP[0m: check the container status [1mSTEP[0m: delete the container [AfterEach] [sig-node] Container Runtime /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:57:56.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "container-runtime-2112" for this suite. [32m•[0m{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]","total":335,"completed":69,"skipped":1235,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-cli] Kubectl client[0m [90mKubectl diff[0m [1mshould check if kubectl diff finds a difference for Deployments [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-cli] Kubectl client ... skipping 18 lines ... Nov 25 03:57:58.517: INFO: stderr: "" Nov 25 03:57:58.517: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:57:58.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "kubectl-6263" for this suite. [32m•[0m{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":335,"completed":70,"skipped":1244,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-storage] Secrets[0m [1mshould be immutable if `immutable` field is set [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-storage] Secrets ... 
skipping 6 lines ... [It] should be immutable if `immutable` field is set [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [AfterEach] [sig-storage] Secrets /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:57:59.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "secrets-933" for this suite. [32m•[0m{"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":335,"completed":71,"skipped":1257,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-node] Pods[0m [1mshould get a host IP [NodeConformance] [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-node] Pods ... skipping 12 lines ... Nov 25 03:58:02.172: INFO: The status of Pod pod-hostip-cae146e4-e1bf-40ca-8688-7d7ccdc661b5 is Running (Ready = true) Nov 25 03:58:02.279: INFO: Pod pod-hostip-cae146e4-e1bf-40ca-8688-7d7ccdc661b5 has hostIP: 10.1.0.5 [AfterEach] [sig-node] Pods /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:58:02.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "pods-4843" for this suite. [32m•[0m{"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":335,"completed":72,"skipped":1260,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-api-machinery] ResourceQuota[0m [1mshould verify ResourceQuota with best effort scope. [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-api-machinery] ResourceQuota ... skipping 20 lines ... [1mSTEP[0m: Deleting the pod [1mSTEP[0m: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:58:19.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "resourcequota-642" for this suite. [32m•[0m{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":335,"completed":73,"skipped":1268,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-node] ConfigMap[0m [1mshould be consumable via environment variable [NodeConformance] [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-node] ConfigMap ... skipping 4 lines ... 
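The ConfigMap spec that follows injects one ConfigMap key into the test container's environment. A minimal sketch of such a pod object, with hypothetical names and image (the real spec uses the e2e test images and its own naming), printed rather than created so it needs no cluster:

// Illustrative sketch: a pod consuming a ConfigMap key as an environment
// variable, in the spirit of the "consumable via environment variable" spec
// below. All names are hypothetical.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox:1.36", // hypothetical image
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					Name: "CONFIG_DATA_1",
					ValueFrom: &corev1.EnvVarSource{
						ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-demo"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}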
[1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [1mSTEP[0m: Creating configMap configmap-3565/configmap-test-3b0de217-0a85-47aa-9dde-9414ce070d1c [1mSTEP[0m: Creating a pod to test consume configMaps Nov 25 03:58:20.183: INFO: Waiting up to 5m0s for pod "pod-configmaps-bdf98ab5-381b-491f-a14d-181c9446c726" in namespace "configmap-3565" to be "Succeeded or Failed" Nov 25 03:58:20.236: INFO: Pod "pod-configmaps-bdf98ab5-381b-491f-a14d-181c9446c726": Phase="Pending", Reason="", readiness=false. Elapsed: 53.313076ms Nov 25 03:58:22.291: INFO: Pod "pod-configmaps-bdf98ab5-381b-491f-a14d-181c9446c726": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108537173s Nov 25 03:58:24.346: INFO: Pod "pod-configmaps-bdf98ab5-381b-491f-a14d-181c9446c726": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.163344599s [1mSTEP[0m: Saw pod success Nov 25 03:58:24.346: INFO: Pod "pod-configmaps-bdf98ab5-381b-491f-a14d-181c9446c726" satisfied condition "Succeeded or Failed" Nov 25 03:58:24.401: INFO: Trying to get logs from node capz-wgj520-md-0-qgnj5 pod pod-configmaps-bdf98ab5-381b-491f-a14d-181c9446c726 container env-test: <nil> [1mSTEP[0m: delete the pod Nov 25 03:58:24.524: INFO: Waiting for pod pod-configmaps-bdf98ab5-381b-491f-a14d-181c9446c726 to disappear Nov 25 03:58:24.577: INFO: Pod pod-configmaps-bdf98ab5-381b-491f-a14d-181c9446c726 no longer exists [AfterEach] [sig-node] ConfigMap /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:58:24.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "configmap-3565" for this suite. [32m•[0m{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":335,"completed":74,"skipped":1273,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-storage] EmptyDir volumes[0m [1mshould support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-storage] EmptyDir volumes ... skipping 3 lines ... [1mSTEP[0m: Building a namespace api object, basename emptydir [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [1mSTEP[0m: Creating a pod to test emptydir 0777 on node default medium Nov 25 03:58:25.127: INFO: Waiting up to 5m0s for pod "pod-3dc01063-7240-4702-942a-f114449ebdb9" in namespace "emptydir-5490" to be "Succeeded or Failed" Nov 25 03:58:25.180: INFO: Pod "pod-3dc01063-7240-4702-942a-f114449ebdb9": Phase="Pending", Reason="", readiness=false. Elapsed: 53.342416ms Nov 25 03:58:27.235: INFO: Pod "pod-3dc01063-7240-4702-942a-f114449ebdb9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.108191115s Nov 25 03:58:29.290: INFO: Pod "pod-3dc01063-7240-4702-942a-f114449ebdb9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.162647213s [1mSTEP[0m: Saw pod success Nov 25 03:58:29.290: INFO: Pod "pod-3dc01063-7240-4702-942a-f114449ebdb9" satisfied condition "Succeeded or Failed" Nov 25 03:58:29.344: INFO: Trying to get logs from node capz-wgj520-md-0-qgnj5 pod pod-3dc01063-7240-4702-942a-f114449ebdb9 container test-container: <nil> [1mSTEP[0m: delete the pod Nov 25 03:58:29.460: INFO: Waiting for pod pod-3dc01063-7240-4702-942a-f114449ebdb9 to disappear Nov 25 03:58:29.514: INFO: Pod pod-3dc01063-7240-4702-942a-f114449ebdb9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:58:29.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "emptydir-5490" for this suite. [32m•[0m{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":335,"completed":75,"skipped":1279,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-apps] Deployment[0m [1mshould run the lifecycle of a Deployment [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-apps] Deployment ... skipping 95 lines ... /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 Nov 25 03:58:40.050: INFO: Log out all the ReplicaSets if there is no deployment created [AfterEach] [sig-apps] Deployment /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:58:40.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "deployment-2887" for this suite. [32m•[0m{"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":335,"completed":76,"skipped":1299,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-node] Kubelet[0m [90mwhen scheduling a busybox command that always fails in a pod[0m [1mshould be possible to delete [NodeConformance] [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-node] Kubelet ... skipping 10 lines ... [It] should be possible to delete [NodeConformance] [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [AfterEach] [sig-node] Kubelet /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:58:40.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "kubelet-test-2121" for this suite. 
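The Kubelet spec just finished above schedules a pod whose command always fails and verifies the pod can still be deleted. A minimal client-go sketch of that create-then-force-delete pattern (hypothetical kubeconfig path, namespace, and image; not the framework's code):

// Illustrative sketch: create a pod whose command always exits non-zero,
// then delete it with a zero grace period. Names and image are hypothetical.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ns, ctx := "kubelet-test-demo", context.Background()

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "bin-false"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			Containers: []corev1.Container{{
				Name:    "bin-false",
				Image:   "busybox:1.36",          // hypothetical image
				Command: []string{"/bin/false"},  // always fails, so the pod crash-loops
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Even a crash-looping pod must be deletable; a zero grace period forces it.
	zero := int64(0)
	if err := cs.CoreV1().Pods(ns).Delete(ctx, "bin-false", metav1.DeleteOptions{GracePeriodSeconds: &zero}); err != nil {
		panic(err)
	}
}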
[32m•[0m{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":335,"completed":77,"skipped":1302,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-network] DNS[0m [1mshould provide DNS for the cluster [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-network] DNS ... skipping 17 lines ... [1mSTEP[0m: deleting the pod [AfterEach] [sig-network] DNS /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 03:58:45.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "dns-3536" for this suite. [32m•[0m{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":335,"completed":78,"skipped":1354,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-storage] Projected configMap[0m [1moptional updates should be reflected in volume [NodeConformance] [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-storage] Projected configMap ... skipping 16 lines ... [1mSTEP[0m: Creating configMap with name cm-test-opt-create-155ab7f6-bb93-4c72-8a3e-24ac54031a29 [1mSTEP[0m: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 04:00:13.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "projected-1711" for this suite. [32m•[0m{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":335,"completed":79,"skipped":1385,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-apps] ReplicationController[0m [1mshould serve a basic image on each replica with a public image [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-apps] ReplicationController ... skipping 14 lines ... 
Nov 25 04:00:16.161: INFO: Trying to dial the pod Nov 25 04:00:21.325: INFO: Controller my-hostname-basic-8cce98ff-d7ae-417c-968e-56d9cc640d1e: Got expected result from replica 1 [my-hostname-basic-8cce98ff-d7ae-417c-968e-56d9cc640d1e-dwm9s]: "my-hostname-basic-8cce98ff-d7ae-417c-968e-56d9cc640d1e-dwm9s", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 04:00:21.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "replication-controller-6116" for this suite. [32m•[0m{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":335,"completed":80,"skipped":1409,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin][0m [1mshould be able to convert from CR v1 to CR v2 [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] ... skipping 21 lines ... [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 04:00:28.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "crd-webhook-8809" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 [32m•[0m{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":335,"completed":81,"skipped":1491,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-network] Networking[0m [90mGranular Checks: Pods[0m [1mshould function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-network] Networking ... skipping 37 lines ... 
Nov 25 04:00:55.518: INFO: ExecWithOptions: execute(POST https://capz-wgj520-5558bd25.westus2.cloudapp.azure.com:6443/api/v1/namespaces/pod-network-test-5181/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostName+%7C+nc+-w+1+-u+192.168.13.251+8081+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true %!s(MISSING)) Nov 25 04:00:56.991: INFO: Found all 1 expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 04:00:56.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "pod-network-test-5181" for this suite. [32m•[0m{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":335,"completed":82,"skipped":1506,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-auth] ServiceAccounts[0m [1mshould guarantee kube-root-ca.crt exist in any namespace [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-auth] ServiceAccounts ... skipping 13 lines ... [1mSTEP[0m: waiting for the root ca configmap reconciled Nov 25 04:00:58.758: INFO: Reconciled root ca configmap in namespace "svcaccounts-5549" [AfterEach] [sig-auth] ServiceAccounts /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 04:00:58.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "svcaccounts-5549" for this suite. [32m•[0m{"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":335,"completed":83,"skipped":1523,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-network] Services[0m [1mshould be able to change the type from ClusterIP to ExternalName [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-network] Services ... skipping 25 lines ... [AfterEach] [sig-network] Services /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 04:01:09.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "services-5992" for this suite. 
[AfterEach] [sig-network] Services /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756 [32m•[0m{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":335,"completed":84,"skipped":1537,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin][0m [1mshould be able to deny attaching pod [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] ... skipping 24 lines ... /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 04:01:16.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "webhook-602" for this suite. [1mSTEP[0m: Destroying namespace "webhook-602-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 [32m•[0m{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":335,"completed":85,"skipped":1549,"failed":0} [90m------------------------------[0m [0m[sig-node] RuntimeClass[0m [1m should support RuntimeClasses API operations [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-node] RuntimeClass ... skipping 19 lines ... [1mSTEP[0m: deleting [1mSTEP[0m: deleting a collection [AfterEach] [sig-node] RuntimeClass /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 04:01:18.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "runtimeclass-9202" for this suite. [32m•[0m{"msg":"PASSED [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]","total":335,"completed":86,"skipped":1549,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-network] DNS[0m [1mshould provide DNS for pods for Hostname [LinuxOnly] [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-network] DNS ... skipping 19 lines ... [1mSTEP[0m: deleting the pod [1mSTEP[0m: deleting the test headless service [AfterEach] [sig-network] DNS /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 04:01:21.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "dns-4204" for this suite. 
[32m•[0m{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":335,"completed":87,"skipped":1555,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-apps] ReplicationController[0m [1mshould test the lifecycle of a ReplicationController [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-apps] ReplicationController ... skipping 27 lines ... [1mSTEP[0m: deleting ReplicationControllers by collection [1mSTEP[0m: waiting for ReplicationController to have a DELETED watchEvent [AfterEach] [sig-apps] ReplicationController /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 04:01:25.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "replication-controller-3578" for this suite. [32m•[0m{"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":335,"completed":88,"skipped":1600,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-node] Container Lifecycle Hook[0m [90mwhen create a pod with lifecycle hook[0m [1mshould execute prestop exec hook properly [NodeConformance] [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-node] Container Lifecycle Hook ... skipping 20 lines ... Nov 25 04:01:32.312: INFO: Pod pod-with-prestop-exec-hook no longer exists [1mSTEP[0m: check prestop hook [AfterEach] [sig-node] Container Lifecycle Hook /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 04:01:32.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "container-lifecycle-hook-2711" for this suite. [32m•[0m{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":335,"completed":89,"skipped":1634,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-api-machinery] ResourceQuota[0m [1mshould create a ResourceQuota and capture the life of a replica set. [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-api-machinery] ResourceQuota ... skipping 13 lines ... 
[1mSTEP[0m: Deleting a ReplicaSet [1mSTEP[0m: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 04:01:44.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "resourcequota-6768" for this suite. [32m•[0m{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":335,"completed":90,"skipped":1653,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-api-machinery] Garbage collector[0m [1mshould not be blocked by dependency circle [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-api-machinery] Garbage collector ... skipping 9 lines ... Nov 25 04:01:45.058: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"62d5dbd2-8786-406a-ae37-7d85f733b4b4", Controller:(*bool)(0xc0056e924e), BlockOwnerDeletion:(*bool)(0xc0056e924f)}} Nov 25 04:01:45.116: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"2439e5b5-7cc5-4b14-9e38-c8b31bd52d40", Controller:(*bool)(0xc0056e9506), BlockOwnerDeletion:(*bool)(0xc0056e9507)}} [AfterEach] [sig-api-machinery] Garbage collector /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 04:01:50.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "gc-103" for this suite. [32m•[0m{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":335,"completed":91,"skipped":1736,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-storage] HostPath[0m [1mshould support subPath [NodeConformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:93[0m [BeforeEach] [sig-storage] HostPath ... skipping 5 lines ... 
[1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37 [It] should support subPath [NodeConformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:93 [1mSTEP[0m: Creating a pod to test hostPath subPath Nov 25 04:01:50.780: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-6214" to be "Succeeded or Failed" Nov 25 04:01:50.834: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 53.944264ms Nov 25 04:01:52.889: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109611326s Nov 25 04:01:54.944: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.164697652s [1mSTEP[0m: Saw pod success Nov 25 04:01:54.944: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" Nov 25 04:01:54.998: INFO: Trying to get logs from node capz-wgj520-md-0-spq5f pod pod-host-path-test container test-container-2: <nil> [1mSTEP[0m: delete the pod Nov 25 04:01:55.119: INFO: Waiting for pod pod-host-path-test to disappear Nov 25 04:01:55.174: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 04:01:55.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "hostpath-6214" for this suite. [32m•[0m{"msg":"PASSED [sig-storage] HostPath should support subPath [NodeConformance]","total":335,"completed":92,"skipped":1746,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-network] EndpointSliceMirroring[0m [1mshould mirror a custom Endpoints resource through create update and delete [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-network] EndpointSliceMirroring ... skipping 11 lines ... [1mSTEP[0m: mirroring an update to a custom Endpoint [1mSTEP[0m: mirroring deletion of a custom Endpoint [AfterEach] [sig-network] EndpointSliceMirroring /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 04:01:56.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "endpointslicemirroring-1344" for this suite. 
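The EndpointSliceMirroring spec that just finished creates a custom Endpoints resource and checks that the endpointslice-mirroring controller keeps a matching EndpointSlice in sync through create, update and delete. The following is a minimal sketch of that setup, assuming client-go and the documented mirroring behaviour for selector-less Services; all names, namespaces, addresses and the kubeconfig path are illustrative, not taken from the test.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	ns := "default" // illustrative namespace

	// A selector-less Service: its Endpoints are managed by hand, which is the
	// case the mirroring controller is documented to mirror into EndpointSlices.
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "example-custom-endpoints"},
		Spec: corev1.ServiceSpec{
			Ports: []corev1.ServicePort{{Name: "example", Port: 80, Protocol: corev1.ProtocolTCP}},
		},
	}
	if _, err := cs.CoreV1().Services(ns).Create(context.TODO(), svc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// The custom Endpoints resource with the same name as the Service.
	ep := &corev1.Endpoints{
		ObjectMeta: metav1.ObjectMeta{Name: "example-custom-endpoints"},
		Subsets: []corev1.EndpointSubset{{
			Addresses: []corev1.EndpointAddress{{IP: "10.1.2.3"}},
			Ports:     []corev1.EndpointPort{{Name: "example", Port: 80, Protocol: corev1.ProtocolTCP}},
		}},
	}
	if _, err := cs.CoreV1().Endpoints(ns).Create(context.TODO(), ep, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// The endpointslice-mirroring controller is then expected to create, update
	// and delete a matching EndpointSlice as this Endpoints object changes.
}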
[32m•[0m{"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":335,"completed":93,"skipped":1766,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-node] InitContainer [NodeConformance][0m [1mshould not start app containers if init containers fail on a RestartAlways pod [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-node] InitContainer [NodeConformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Nov 25 04:01:56.190: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig [1mSTEP[0m: Building a namespace api object, basename init-container [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [1mSTEP[0m: creating the pod Nov 25 04:01:56.565: INFO: PodSpec: initContainers in spec.initContainers Nov 25 04:02:38.047: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-7630c3c0-1da3-4a0d-8a67-2934e5f670a5", GenerateName:"", Namespace:"init-container-5199", SelfLink:"", UID:"9a3e0e11-8a4d-4d6e-991c-492d1d918053", ResourceVersion:"9164", Generation:0, CreationTimestamp:time.Date(2022, time.November, 25, 4, 1, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"565665522"}, Annotations:map[string]string{"cni.projectcalico.org/containerID":"bbe51936ed3806b8019cfde0538e4e5d3cb409332646b1922b00eaed3208d04f", "cni.projectcalico.org/podIP":"192.168.13.196/32", "cni.projectcalico.org/podIPs":"192.168.13.196/32"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.November, 25, 4, 1, 56, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0033fa228), Subresource:""}, v1.ManagedFieldsEntry{Manager:"Go-http-client", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.November, 25, 4, 1, 57, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0033fa258), Subresource:"status"}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.November, 25, 4, 1, 57, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0033fa288), Subresource:"status"}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-clchk", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), 
GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc0028d6100), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-2", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-clchk", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-2", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-clchk", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.6", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, 
VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-clchk", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0040fe470), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"capz-wgj520-md-0-spq5f", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00467a230), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0040fe4f0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0040fe510)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0040fe518), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0040fe51c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc0058e4150), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 25, 4, 1, 56, 0, time.Local), Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 25, 4, 1, 56, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 25, 4, 1, 56, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 25, 4, 1, 56, 0, time.Local), Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.1.0.5", PodIP:"192.168.13.196", PodIPs:[]v1.PodIP{v1.PodIP{IP:"192.168.13.196"}}, StartTime:time.Date(2022, time.November, 25, 4, 1, 56, 0, time.Local), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0033fa2d0), 
Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00467a310)}, Ready:false, RestartCount:3, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-2", ImageID:"k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf", ContainerID:"containerd://1de5b6895dbe234c8cb67770596c8193cef41ab7f08d855e0796d9f243cf2cd8", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0028d6180), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-2", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0028d6160), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.6", ImageID:"", ContainerID:"", Started:(*bool)(0xc0040fe57c)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [sig-node] InitContainer [NodeConformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 25 04:02:38.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "init-container-5199" for this suite. [32m•[0m{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":335,"completed":94,"skipped":1775,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-node] Probing container[0m [1mshould *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-node] Probing container ... skipping 8 lines ... [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [1mSTEP[0m: Creating pod liveness-055c60f2-bdfd-4e14-ab6c-d84ec0963edf in namespace container-probe-4049 Nov 25 04:02:40.709: INFO: Started pod liveness-055c60f2-bdfd-4e14-ab6c-d84ec0963edf in namespace container-probe-4049 [1mSTEP[0m: checking the pod's current state and verifying that restartCount is present Nov 25 04:02:40.763: INFO: Initial restart count of pod liveness-055c60f2-bdfd-4e14-ab6c-d84ec0963edf is 0 {"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:168","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Entrypoint received interrupt: terminated","severity":"error","time":"2022-11-25T04:02:50Z"}
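The large pod dump above belongs to the InitContainer spec: a RestartAlways pod whose first init container runs /bin/false, so the kubelet keeps restarting init1 while init2 and the app container run1 never start. Below is a minimal sketch, assuming client-go, of a pod with the same shape; the images and commands are taken from the dump, while the pod name, namespace and kubeconfig path are illustrative.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-example"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				// init1 always fails, blocking everything that follows it.
				{Name: "init1", Image: "k8s.gcr.io/e2e-test-images/busybox:1.29-2", Command: []string{"/bin/false"}},
				{Name: "init2", Image: "k8s.gcr.io/e2e-test-images/busybox:1.29-2", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "k8s.gcr.io/pause:3.6"},
			},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// Expected observable state, matching the dump above: init1 restarts with a
	// growing RestartCount, init2 stays Waiting, and run1 is never started.
}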