PR oscr: ⚠️ Use Kubernetes 1.25 in Quick Start docs and CAPD.
Result: failure
Tests: 0 failed / 7 succeeded
Started: 2022-09-05 14:08
Elapsed: 1h11m
Refs: 7156
Uploader: crier

No Test Failures!

Passed: 7 tests
Skipped: 20 tests

Error lines from build-log.txt

... skipping 904 lines ...
Status: Downloaded newer image for quay.io/jetstack/cert-manager-controller:v1.9.1
quay.io/jetstack/cert-manager-controller:v1.9.1
+ export GINKGO_NODES=3
+ GINKGO_NODES=3
+ export GINKGO_NOCOLOR=true
+ GINKGO_NOCOLOR=true
+ export GINKGO_ARGS=--fail-fast
+ GINKGO_ARGS=--fail-fast
+ export E2E_CONF_FILE=/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/config/docker.yaml
+ E2E_CONF_FILE=/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/config/docker.yaml
+ export ARTIFACTS=/logs/artifacts
+ ARTIFACTS=/logs/artifacts
+ export SKIP_RESOURCE_CLEANUP=false
+ SKIP_RESOURCE_CLEANUP=false
... skipping 79 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-kcp-scale-in --load-restrictor LoadRestrictionsNone > /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-kcp-scale-in.yaml
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-ipv6 --load-restrictor LoadRestrictionsNone > /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-ipv6.yaml
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-topology --load-restrictor LoadRestrictionsNone > /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-topology.yaml
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-ignition --load-restrictor LoadRestrictionsNone > /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-ignition.yaml
mkdir -p /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/test-extension
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api/test/extension/config/default > /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/test-extension/deployment.yaml
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/ginkgo-v2.1.4 -v --trace --tags=e2e --focus="\[K8s-Upgrade\]"  --nodes=3 --no-color=true --output-dir="/logs/artifacts" --junit-report="junit.e2e_suite.1.xml" --fail-fast . -- \
    -e2e.artifacts-folder="/logs/artifacts" \
    -e2e.config="/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/config/docker.yaml" \
    -e2e.skip-resource-cleanup=false -e2e.use-existing-cluster=false
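The `GINKGO_*` variables exported earlier feed directly into this ginkgo invocation, with everything after `--` passed through to the test binary itself. A minimal sketch of how such a command line could be assembled from the environment (variable names are taken from this log; the assembly logic is illustrative, not the actual Makefile):

```python
import os

def build_ginkgo_cmd(focus: str) -> list[str]:
    """Assemble a ginkgo CLI invocation from the GINKGO_* environment
    variables exported earlier in the log (illustrative sketch only)."""
    args = [
        "ginkgo", "-v", "--trace", "--tags=e2e",
        f"--focus={focus}",
        f"--nodes={os.environ.get('GINKGO_NODES', '1')}",
        f"--no-color={os.environ.get('GINKGO_NOCOLOR', 'false')}",
    ]
    # GINKGO_ARGS carries free-form extra flags such as --fail-fast.
    extra = os.environ.get("GINKGO_ARGS", "")
    if extra:
        args.extend(extra.split())
    # Arguments after "--" go to the compiled e2e test binary, not ginkgo.
    args += ["--", f"-e2e.artifacts-folder={os.environ.get('ARTIFACTS', '')}"]
    return args

os.environ.update(GINKGO_NODES="3", GINKGO_NOCOLOR="true",
                  GINKGO_ARGS="--fail-fast", ARTIFACTS="/logs/artifacts")
print(build_ginkgo_cmd(r"\[K8s-Upgrade\]"))
```

With the values from this log, the sketch reproduces the `--nodes=3 --no-color=true ... --fail-fast` shape seen in the command above.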
go: downloading github.com/onsi/gomega v1.20.0
go: downloading k8s.io/apimachinery v0.24.2
go: downloading k8s.io/api v0.24.2
... skipping 227 lines ...
    kubeadmconfig.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-s5wz98-mp-0-config created
    kubeadmconfig.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-s5wz98-mp-0-config-cgroupfs created
    cluster.cluster.x-k8s.io/k8s-upgrade-and-conformance-s5wz98 created
    machinepool.cluster.x-k8s.io/k8s-upgrade-and-conformance-s5wz98-mp-0 created
    dockermachinepool.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-s5wz98-dmp-0 created

    Failed to get logs for Machine k8s-upgrade-and-conformance-s5wz98-gp95j-kjvn2, Cluster k8s-upgrade-and-conformance-ntr1sc/k8s-upgrade-and-conformance-s5wz98: exit status 2
    Failed to get logs for Machine k8s-upgrade-and-conformance-s5wz98-md-0-62h4s-76fdb55b5f-9jgcx, Cluster k8s-upgrade-and-conformance-ntr1sc/k8s-upgrade-and-conformance-s5wz98: exit status 2
    Failed to get logs for Machine k8s-upgrade-and-conformance-s5wz98-md-0-62h4s-76fdb55b5f-tzhp2, Cluster k8s-upgrade-and-conformance-ntr1sc/k8s-upgrade-and-conformance-s5wz98: exit status 2
    Failed to get logs for MachinePool k8s-upgrade-and-conformance-s5wz98-mp-0, Cluster k8s-upgrade-and-conformance-ntr1sc/k8s-upgrade-and-conformance-s5wz98: exit status 2
  << End Captured StdOut/StdErr Output

  Begin Captured GinkgoWriter Output >>
    STEP: Creating a namespace for hosting the "k8s-upgrade-and-conformance" test spec 09/05/22 14:23:28.45
    INFO: Creating namespace k8s-upgrade-and-conformance-ntr1sc
    INFO: Creating event watcher for namespace "k8s-upgrade-and-conformance-ntr1sc"
... skipping 41 lines ...
    
    Running in parallel across 4 nodes
    
    Sep  5 14:31:09.450: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 14:31:09.464: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
    Sep  5 14:31:09.496: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
    Sep  5 14:31:09.617: INFO: The status of Pod coredns-78fcd69978-ftp52 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep  5 14:31:09.617: INFO: The status of Pod coredns-78fcd69978-mxp7n is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep  5 14:31:09.617: INFO: The status of Pod kindnet-dhc7q is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep  5 14:31:09.617: INFO: The status of Pod kindnet-hl76v is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep  5 14:31:09.617: INFO: The status of Pod kube-proxy-555jh is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep  5 14:31:09.617: INFO: The status of Pod kube-proxy-jqn4k is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep  5 14:31:09.617: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
    Sep  5 14:31:09.617: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
    Sep  5 14:31:09.617: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep  5 14:31:09.617: INFO: coredns-78fcd69978-ftp52  k8s-upgrade-and-conformance-s5wz98-worker-fwcp3b  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:27:26 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:30:20 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:27:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:27:26 +0000 UTC  }]
    Sep  5 14:31:09.617: INFO: coredns-78fcd69978-mxp7n  k8s-upgrade-and-conformance-s5wz98-worker-6kg7cd  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:29:19 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:30:19 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:29:22 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:29:19 +0000 UTC  }]
    Sep  5 14:31:09.617: INFO: kindnet-dhc7q             k8s-upgrade-and-conformance-s5wz98-worker-fwcp3b  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:25:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:30:19 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:25:18 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:25:07 +0000 UTC  }]
    Sep  5 14:31:09.617: INFO: kindnet-hl76v             k8s-upgrade-and-conformance-s5wz98-worker-6kg7cd  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:25:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:30:19 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:25:43 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:25:23 +0000 UTC  }]
    Sep  5 14:31:09.617: INFO: kube-proxy-555jh          k8s-upgrade-and-conformance-s5wz98-worker-fwcp3b  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:28:26 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:30:19 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:28:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:28:26 +0000 UTC  }]
    Sep  5 14:31:09.617: INFO: kube-proxy-jqn4k          k8s-upgrade-and-conformance-s5wz98-worker-6kg7cd  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:28:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:30:19 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:28:23 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:28:20 +0000 UTC  }]
    Sep  5 14:31:09.618: INFO: 
    Sep  5 14:31:11.645: INFO: The status of Pod coredns-78fcd69978-ftp52 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep  5 14:31:11.645: INFO: The status of Pod coredns-78fcd69978-mxp7n is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep  5 14:31:11.645: INFO: The status of Pod kindnet-dhc7q is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep  5 14:31:11.645: INFO: The status of Pod kindnet-hl76v is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep  5 14:31:11.646: INFO: The status of Pod kube-proxy-555jh is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep  5 14:31:11.646: INFO: The status of Pod kube-proxy-jqn4k is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep  5 14:31:11.646: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (2 seconds elapsed)
    Sep  5 14:31:11.646: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
    Sep  5 14:31:11.646: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep  5 14:31:11.646: INFO: coredns-78fcd69978-ftp52  k8s-upgrade-and-conformance-s5wz98-worker-fwcp3b  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:27:26 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:30:20 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:27:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:27:26 +0000 UTC  }]
    Sep  5 14:31:11.646: INFO: coredns-78fcd69978-mxp7n  k8s-upgrade-and-conformance-s5wz98-worker-6kg7cd  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:29:19 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:30:19 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:29:22 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:29:19 +0000 UTC  }]
    Sep  5 14:31:11.646: INFO: kindnet-dhc7q             k8s-upgrade-and-conformance-s5wz98-worker-fwcp3b  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:25:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:30:19 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:25:18 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:25:07 +0000 UTC  }]
    Sep  5 14:31:11.646: INFO: kindnet-hl76v             k8s-upgrade-and-conformance-s5wz98-worker-6kg7cd  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:25:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:30:19 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:25:43 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:25:23 +0000 UTC  }]
    Sep  5 14:31:11.646: INFO: kube-proxy-555jh          k8s-upgrade-and-conformance-s5wz98-worker-fwcp3b  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:28:26 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:30:19 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:28:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:28:26 +0000 UTC  }]
    Sep  5 14:31:11.646: INFO: kube-proxy-jqn4k          k8s-upgrade-and-conformance-s5wz98-worker-6kg7cd  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:28:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:30:19 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:28:23 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:28:20 +0000 UTC  }]
    Sep  5 14:31:11.646: INFO: 
    Sep  5 14:31:13.649: INFO: The status of Pod coredns-78fcd69978-ftp52 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep  5 14:31:13.649: INFO: The status of Pod coredns-78fcd69978-mxp7n is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep  5 14:31:13.649: INFO: The status of Pod kindnet-dhc7q is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep  5 14:31:13.649: INFO: The status of Pod kindnet-hl76v is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep  5 14:31:13.649: INFO: The status of Pod kube-proxy-555jh is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep  5 14:31:13.649: INFO: The status of Pod kube-proxy-jqn4k is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep  5 14:31:13.649: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (4 seconds elapsed)
    Sep  5 14:31:13.649: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
    Sep  5 14:31:13.649: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep  5 14:31:13.649: INFO: coredns-78fcd69978-ftp52  k8s-upgrade-and-conformance-s5wz98-worker-fwcp3b  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:27:26 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:30:20 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:27:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:27:26 +0000 UTC  }]
    Sep  5 14:31:13.649: INFO: coredns-78fcd69978-mxp7n  k8s-upgrade-and-conformance-s5wz98-worker-6kg7cd  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:29:19 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:30:19 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:29:22 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:29:19 +0000 UTC  }]
    Sep  5 14:31:13.649: INFO: kindnet-dhc7q             k8s-upgrade-and-conformance-s5wz98-worker-fwcp3b  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:25:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:30:19 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:25:18 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:25:07 +0000 UTC  }]
    Sep  5 14:31:13.649: INFO: kindnet-hl76v             k8s-upgrade-and-conformance-s5wz98-worker-6kg7cd  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:25:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:30:19 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:25:43 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:25:23 +0000 UTC  }]
    Sep  5 14:31:13.649: INFO: kube-proxy-555jh          k8s-upgrade-and-conformance-s5wz98-worker-fwcp3b  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:28:26 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:30:19 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:28:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:28:26 +0000 UTC  }]
    Sep  5 14:31:13.649: INFO: kube-proxy-jqn4k          k8s-upgrade-and-conformance-s5wz98-worker-6kg7cd  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:28:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:30:19 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:28:23 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:28:20 +0000 UTC  }]
    Sep  5 14:31:13.649: INFO: 
    Sep  5 14:31:15.648: INFO: The status of Pod coredns-78fcd69978-sk5n7 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep  5 14:31:15.649: INFO: The status of Pod coredns-78fcd69978-wg8zs is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep  5 14:31:15.649: INFO: 14 / 16 pods in namespace 'kube-system' are running and ready (6 seconds elapsed)
    Sep  5 14:31:15.649: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
    Sep  5 14:31:15.649: INFO: POD                       NODE                                                            PHASE    GRACE  CONDITIONS
    Sep  5 14:31:15.649: INFO: coredns-78fcd69978-sk5n7  k8s-upgrade-and-conformance-s5wz98-md-0-62h4s-76fdb55b5f-9jgcx  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:31:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:31:14 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:31:14 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:31:14 +0000 UTC  }]
    Sep  5 14:31:15.649: INFO: coredns-78fcd69978-wg8zs  k8s-upgrade-and-conformance-s5wz98-worker-9oo03u                Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:31:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:31:14 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:31:14 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 14:31:14 +0000 UTC  }]
    Sep  5 14:31:15.649: INFO: 
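The repeated INFO lines above come from the conformance framework polling `kube-system` until every pod is either terminal or Running with `Ready = true`, exactly as the log wording says. The per-pod condition can be sketched as pure logic over plain dicts (field names mirror the log output, not the client-go API):

```python
def pod_blocks_startup(pod: dict) -> bool:
    """True if this pod keeps the wait loop going: i.e. it is neither
    terminal (Succeeded/Failed) nor Running with Ready=True.
    Illustrative sketch of the condition described in the log."""
    phase = pod["phase"]
    if phase in ("Succeeded", "Failed"):
        return False  # terminal phases end the wait for this pod
    if phase == "Running":
        # Running pods still block until their Ready condition is True.
        return not any(
            c["type"] == "Ready" and c["status"] == "True"
            for c in pod["conditions"]
        )
    return True  # Pending (and anything else) keeps us waiting

pods = [
    {"phase": "Running", "conditions": [{"type": "Ready", "status": "True"}]},
    {"phase": "Running", "conditions": [{"type": "Ready", "status": "False"}]},
    {"phase": "Pending", "conditions": []},
]
blocked = [p for p in pods if pod_blocks_startup(p)]
print(f"{len(pods) - len(blocked)} / {len(pods)} pods are running and ready")
```

This mirrors why the log reports `Running (Ready = false)` pods as not yet counted in the "running and ready" tally.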
... skipping 33 lines ...
    Sep  5 14:31:17.894: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in env vars [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-817a0f49-8d68-49a1-9f14-2970250027f9
    STEP: Creating a pod to test consume secrets
    Sep  5 14:31:17.946: INFO: Waiting up to 5m0s for pod "pod-secrets-27f56adc-60da-47ee-b8c6-aca4f23ea15e" in namespace "secrets-7357" to be "Succeeded or Failed"
    Sep  5 14:31:17.958: INFO: Pod "pod-secrets-27f56adc-60da-47ee-b8c6-aca4f23ea15e": Phase="Pending", Reason="", readiness=false. Elapsed: 11.875221ms
    Sep  5 14:31:20.028: INFO: Pod "pod-secrets-27f56adc-60da-47ee-b8c6-aca4f23ea15e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082621473s
    Sep  5 14:31:22.033: INFO: Pod "pod-secrets-27f56adc-60da-47ee-b8c6-aca4f23ea15e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087782472s
    Sep  5 14:31:24.039: INFO: Pod "pod-secrets-27f56adc-60da-47ee-b8c6-aca4f23ea15e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.09364518s
    STEP: Saw pod success
    Sep  5 14:31:24.040: INFO: Pod "pod-secrets-27f56adc-60da-47ee-b8c6-aca4f23ea15e" satisfied condition "Succeeded or Failed"
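The `Waiting up to 5m0s for pod ... to be "Succeeded or Failed"` sequence above is a simple poll-until-terminal-phase loop with a timeout. A testable sketch of that loop (the phase source and sleep are injected so it runs without a cluster; interval and structure are illustrative, not the framework's exact code):

```python
import time

def wait_for_pod_terminal(get_phase, timeout_s=300, interval_s=2.0,
                          sleep=time.sleep):
    """Poll a pod's phase until it reaches Succeeded or Failed, or the
    timeout elapses -- the '"Succeeded or Failed"' wait seen in the log."""
    waited = 0.0
    phase = get_phase()
    while waited <= timeout_s:
        if phase in ("Succeeded", "Failed"):
            return phase
        sleep(interval_s)
        waited += interval_s
        phase = get_phase()
    raise TimeoutError(f"pod still {phase!r} after {timeout_s}s")

# Simulate the Pending -> Pending -> Succeeded progression from the log.
phases = iter(["Pending", "Pending", "Succeeded"])
print(wait_for_pod_terminal(lambda: next(phases), sleep=lambda _: None))
```

A "Saw pod success" step then corresponds to the loop returning `"Succeeded"` before the timeout.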
    Sep  5 14:31:24.045: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-md-0-62h4s-76fdb55b5f-9jgcx pod pod-secrets-27f56adc-60da-47ee-b8c6-aca4f23ea15e container secret-env-test: <nil>
    STEP: delete the pod
    Sep  5 14:31:24.078: INFO: Waiting for pod pod-secrets-27f56adc-60da-47ee-b8c6-aca4f23ea15e to disappear
    Sep  5 14:31:24.084: INFO: Pod pod-secrets-27f56adc-60da-47ee-b8c6-aca4f23ea15e no longer exists
    [AfterEach] [sig-node] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:31:24.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-7357" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":7,"failed":0}

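Each finished spec emits a one-line JSON progress record like the `{"msg":"PASSED ..."}` line above. A small tally over such records might look like this (the record shape is inferred solely from the lines shown in this log):

```python
import json

def tally(lines):
    """Count PASSED/FAILED records from one-line JSON progress output of
    the shape seen in this log; all other log lines are ignored."""
    passed = failed = skipped = 0
    for line in lines:
        line = line.strip()
        if not line.startswith('{"msg"'):
            continue  # ordinary log line, not a progress record
        rec = json.loads(line)
        if rec["msg"].startswith("PASSED"):
            passed += 1
        elif rec["msg"].startswith("FAILED"):
            failed += 1
        # "skipped" is a running per-worker count, so keep the max seen.
        skipped = max(skipped, rec.get("skipped", 0))
    return passed, failed, skipped

log = [
    '{"msg":"PASSED [sig-node] Secrets should be consumable from pods '
    'in env vars","total":-1,"completed":1,"skipped":7,"failed":0}',
    "some other log line",
]
print(tally(log))
```

Run over this build log, such a tally would reproduce the "0 failed / 7 succeeded" summary at the top of the page.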
    
    S
    ------------------------------
    [BeforeEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    W0905 14:31:17.901532      21 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
    Sep  5 14:31:17.901: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward api env vars
    Sep  5 14:31:17.922: INFO: Waiting up to 5m0s for pod "downward-api-76bb2f6f-edd7-40e5-843a-efa37af47750" in namespace "downward-api-7768" to be "Succeeded or Failed"
    Sep  5 14:31:17.941: INFO: Pod "downward-api-76bb2f6f-edd7-40e5-843a-efa37af47750": Phase="Pending", Reason="", readiness=false. Elapsed: 18.043805ms
    Sep  5 14:31:20.029: INFO: Pod "downward-api-76bb2f6f-edd7-40e5-843a-efa37af47750": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106529154s
    Sep  5 14:31:22.033: INFO: Pod "downward-api-76bb2f6f-edd7-40e5-843a-efa37af47750": Phase="Pending", Reason="", readiness=false. Elapsed: 4.110725373s
    Sep  5 14:31:24.039: INFO: Pod "downward-api-76bb2f6f-edd7-40e5-843a-efa37af47750": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.116476383s
    STEP: Saw pod success
    Sep  5 14:31:24.039: INFO: Pod "downward-api-76bb2f6f-edd7-40e5-843a-efa37af47750" satisfied condition "Succeeded or Failed"
    Sep  5 14:31:24.047: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-md-0-62h4s-76fdb55b5f-tzhp2 pod downward-api-76bb2f6f-edd7-40e5-843a-efa37af47750 container dapi-container: <nil>
    STEP: delete the pod
    Sep  5 14:31:24.086: INFO: Waiting for pod downward-api-76bb2f6f-edd7-40e5-843a-efa37af47750 to disappear
    Sep  5 14:31:24.091: INFO: Pod downward-api-76bb2f6f-edd7-40e5-843a-efa37af47750 no longer exists
    [AfterEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:31:24.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-7768" for this suite.
    
    •S
    ------------------------------
    {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":10,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    W0905 14:31:17.888576      19 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
    Sep  5 14:31:17.888: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0666 on node default medium
    Sep  5 14:31:17.913: INFO: Waiting up to 5m0s for pod "pod-85caa24f-a806-4e5e-8bb7-a583f4d53e75" in namespace "emptydir-1474" to be "Succeeded or Failed"
    Sep  5 14:31:17.918: INFO: Pod "pod-85caa24f-a806-4e5e-8bb7-a583f4d53e75": Phase="Pending", Reason="", readiness=false. Elapsed: 5.371768ms
    Sep  5 14:31:20.028: INFO: Pod "pod-85caa24f-a806-4e5e-8bb7-a583f4d53e75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11484693s
    Sep  5 14:31:22.033: INFO: Pod "pod-85caa24f-a806-4e5e-8bb7-a583f4d53e75": Phase="Pending", Reason="", readiness=false. Elapsed: 4.120533067s
    Sep  5 14:31:24.039: INFO: Pod "pod-85caa24f-a806-4e5e-8bb7-a583f4d53e75": Phase="Pending", Reason="", readiness=false. Elapsed: 6.125906674s
    Sep  5 14:31:26.045: INFO: Pod "pod-85caa24f-a806-4e5e-8bb7-a583f4d53e75": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.131790463s
    STEP: Saw pod success
    Sep  5 14:31:26.045: INFO: Pod "pod-85caa24f-a806-4e5e-8bb7-a583f4d53e75" satisfied condition "Succeeded or Failed"
    Sep  5 14:31:26.049: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-worker-fcjlzd pod pod-85caa24f-a806-4e5e-8bb7-a583f4d53e75 container test-container: <nil>
    STEP: delete the pod
    Sep  5 14:31:26.074: INFO: Waiting for pod pod-85caa24f-a806-4e5e-8bb7-a583f4d53e75 to disappear
    Sep  5 14:31:26.078: INFO: Pod pod-85caa24f-a806-4e5e-8bb7-a583f4d53e75 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:31:26.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-1474" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 47 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:31:31.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "job-5294" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":2,"skipped":11,"failed":0}

    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:31:26.156: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0644 on node default medium
    Sep  5 14:31:26.209: INFO: Waiting up to 5m0s for pod "pod-a85d1404-c713-463d-8c48-191bf24b200f" in namespace "emptydir-2586" to be "Succeeded or Failed"
    Sep  5 14:31:26.213: INFO: Pod "pod-a85d1404-c713-463d-8c48-191bf24b200f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.380949ms
    Sep  5 14:31:28.221: INFO: Pod "pod-a85d1404-c713-463d-8c48-191bf24b200f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012247106s
    Sep  5 14:31:30.228: INFO: Pod "pod-a85d1404-c713-463d-8c48-191bf24b200f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019496778s
    Sep  5 14:31:32.248: INFO: Pod "pod-a85d1404-c713-463d-8c48-191bf24b200f": Phase="Running", Reason="", readiness=true. Elapsed: 6.039418262s
    Sep  5 14:31:34.257: INFO: Pod "pod-a85d1404-c713-463d-8c48-191bf24b200f": Phase="Running", Reason="", readiness=false. Elapsed: 8.048493941s
    Sep  5 14:31:36.262: INFO: Pod "pod-a85d1404-c713-463d-8c48-191bf24b200f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.053268946s
    STEP: Saw pod success
    Sep  5 14:31:36.262: INFO: Pod "pod-a85d1404-c713-463d-8c48-191bf24b200f" satisfied condition "Succeeded or Failed"
    Sep  5 14:31:36.267: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-md-0-62h4s-76fdb55b5f-9jgcx pod pod-a85d1404-c713-463d-8c48-191bf24b200f container test-container: <nil>
    STEP: delete the pod
    Sep  5 14:31:36.288: INFO: Waiting for pod pod-a85d1404-c713-463d-8c48-191bf24b200f to disappear
    Sep  5 14:31:36.292: INFO: Pod pod-a85d1404-c713-463d-8c48-191bf24b200f no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:31:36.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-2586" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":27,"failed":0}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 24 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:31:39.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-3527" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":-1,"completed":3,"skipped":20,"failed":0}

    
    SSS
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":24,"failed":0}

    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:31:26.618: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename webhook
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 22 lines ...
    STEP: Destroying namespace "webhook-6450-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":-1,"completed":2,"skipped":24,"failed":0}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 51 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:31:43.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-7622" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":2,"skipped":43,"failed":0}

    
    SSSSSSSSSSSSSSSSSS
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":-1,"completed":3,"skipped":30,"failed":0}

    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:31:42.291: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  5 14:31:42.339: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c244ade6-4613-4943-9c19-a39dbbffd2de" in namespace "projected-2422" to be "Succeeded or Failed"
    Sep  5 14:31:42.350: INFO: Pod "downwardapi-volume-c244ade6-4613-4943-9c19-a39dbbffd2de": Phase="Pending", Reason="", readiness=false. Elapsed: 9.537199ms
    Sep  5 14:31:44.354: INFO: Pod "downwardapi-volume-c244ade6-4613-4943-9c19-a39dbbffd2de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014046997s
    Sep  5 14:31:46.360: INFO: Pod "downwardapi-volume-c244ade6-4613-4943-9c19-a39dbbffd2de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019993142s
    STEP: Saw pod success
    Sep  5 14:31:46.360: INFO: Pod "downwardapi-volume-c244ade6-4613-4943-9c19-a39dbbffd2de" satisfied condition "Succeeded or Failed"
    Sep  5 14:31:46.365: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-worker-9oo03u pod downwardapi-volume-c244ade6-4613-4943-9c19-a39dbbffd2de container client-container: <nil>
    STEP: delete the pod
    Sep  5 14:31:46.399: INFO: Waiting for pod downwardapi-volume-c244ade6-4613-4943-9c19-a39dbbffd2de to disappear
    Sep  5 14:31:46.404: INFO: Pod downwardapi-volume-c244ade6-4613-4943-9c19-a39dbbffd2de no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:31:46.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-2422" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":30,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
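(Editor's note) The repeated `Waiting up to 5m0s for pod … to be "Succeeded or Failed"` / `Elapsed:` lines above come from the e2e framework polling the pod phase roughly every 2s until it reaches a terminal phase or the timeout expires. A rough, hypothetical Python analogue of that loop (not the framework's actual Go code); the fake phase source stands in for the API server, and the inter-poll sleep is omitted so the sketch stays runnable:

```python
def wait_for_terminal_phase(get_phase, timeout_polls=150):
    """Poll get_phase() until the pod reports "Succeeded" or "Failed".

    timeout_polls=150 stands in for 5m0s at a ~2s poll interval (an
    assumption); a real implementation would sleep between polls.
    """
    for _ in range(timeout_polls):
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
    raise TimeoutError("pod never reached a terminal phase")

# Fake API server: Pending twice, then Succeeded, mirroring the
# Pending/Pending/Succeeded progression in the Elapsed lines above.
phases = iter(["Pending", "Pending", "Succeeded"])
print(wait_for_terminal_phase(lambda: next(phases)))  # Succeeded
```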
    [BeforeEach] [sig-node] Container Runtime
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:31:47.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-runtime-6043" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":61,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 17 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:31:50.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-5035" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":4,"skipped":87,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:31:50.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-8298" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":-1,"completed":4,"skipped":23,"failed":0}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 21 lines ...
    STEP: Destroying namespace "webhook-2786-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":5,"skipped":97,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Container Runtime
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:31:50.618: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename container-runtime
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: create the container
    STEP: wait for the container to reach Failed
    STEP: get the container status
    STEP: the container should be terminated
    STEP: the termination message should be set
    Sep  5 14:31:53.713: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
    STEP: delete the container
    [AfterEach] [sig-node] Container Runtime
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:31:53.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-runtime-7991" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":169,"failed":0}

    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] DisruptionController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 16 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:31:54.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "disruption-1957" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":-1,"completed":5,"skipped":24,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:31:51.216: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating projection with secret that has name projected-secret-test-4f8d5fab-3544-4c43-9ac5-170a304a6e83
    STEP: Creating a pod to test consume secrets
    Sep  5 14:31:51.287: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6c2172da-8a3b-4569-8cd5-bf72842f08fd" in namespace "projected-5417" to be "Succeeded or Failed"
    Sep  5 14:31:51.301: INFO: Pod "pod-projected-secrets-6c2172da-8a3b-4569-8cd5-bf72842f08fd": Phase="Pending", Reason="", readiness=false. Elapsed: 13.047371ms
    Sep  5 14:31:53.304: INFO: Pod "pod-projected-secrets-6c2172da-8a3b-4569-8cd5-bf72842f08fd": Phase="Running", Reason="", readiness=false. Elapsed: 2.016688092s
    Sep  5 14:31:55.309: INFO: Pod "pod-projected-secrets-6c2172da-8a3b-4569-8cd5-bf72842f08fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021293036s
    STEP: Saw pod success
    Sep  5 14:31:55.309: INFO: Pod "pod-projected-secrets-6c2172da-8a3b-4569-8cd5-bf72842f08fd" satisfied condition "Succeeded or Failed"
    Sep  5 14:31:55.312: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-worker-9oo03u pod pod-projected-secrets-6c2172da-8a3b-4569-8cd5-bf72842f08fd container projected-secret-volume-test: <nil>
    STEP: delete the pod
    Sep  5 14:31:55.337: INFO: Waiting for pod pod-projected-secrets-6c2172da-8a3b-4569-8cd5-bf72842f08fd to disappear
    Sep  5 14:31:55.341: INFO: Pod pod-projected-secrets-6c2172da-8a3b-4569-8cd5-bf72842f08fd no longer exists
    [AfterEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:31:55.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-5417" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":134,"failed":0}

    
    SSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:31:53.767: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-3e17da10-f110-403e-b9f2-fb4a84258ebf
    STEP: Creating a pod to test consume secrets
    Sep  5 14:31:53.871: INFO: Waiting up to 5m0s for pod "pod-secrets-eced038c-e89a-42a3-83fc-cf0880e4290c" in namespace "secrets-191" to be "Succeeded or Failed"
    Sep  5 14:31:53.876: INFO: Pod "pod-secrets-eced038c-e89a-42a3-83fc-cf0880e4290c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.551735ms
    Sep  5 14:31:55.882: INFO: Pod "pod-secrets-eced038c-e89a-42a3-83fc-cf0880e4290c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010981389s
    Sep  5 14:31:57.888: INFO: Pod "pod-secrets-eced038c-e89a-42a3-83fc-cf0880e4290c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017281536s
    STEP: Saw pod success
    Sep  5 14:31:57.888: INFO: Pod "pod-secrets-eced038c-e89a-42a3-83fc-cf0880e4290c" satisfied condition "Succeeded or Failed"
    Sep  5 14:31:57.896: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-worker-fcjlzd pod pod-secrets-eced038c-e89a-42a3-83fc-cf0880e4290c container secret-volume-test: <nil>
    STEP: delete the pod
    Sep  5 14:31:57.918: INFO: Waiting for pod pod-secrets-eced038c-e89a-42a3-83fc-cf0880e4290c to disappear
    Sep  5 14:31:57.924: INFO: Pod pod-secrets-eced038c-e89a-42a3-83fc-cf0880e4290c no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 11 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should provide container's memory request [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  5 14:31:55.027: INFO: Waiting up to 5m0s for pod "downwardapi-volume-147ca61b-a058-4d00-9e86-3bc89bf9ec9e" in namespace "projected-213" to be "Succeeded or Failed"
    Sep  5 14:31:55.030: INFO: Pod "downwardapi-volume-147ca61b-a058-4d00-9e86-3bc89bf9ec9e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.532312ms
    Sep  5 14:31:57.036: INFO: Pod "downwardapi-volume-147ca61b-a058-4d00-9e86-3bc89bf9ec9e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009097378s
    Sep  5 14:31:59.041: INFO: Pod "downwardapi-volume-147ca61b-a058-4d00-9e86-3bc89bf9ec9e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014652302s
    STEP: Saw pod success
    Sep  5 14:31:59.042: INFO: Pod "downwardapi-volume-147ca61b-a058-4d00-9e86-3bc89bf9ec9e" satisfied condition "Succeeded or Failed"
    Sep  5 14:31:59.046: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-worker-9oo03u pod downwardapi-volume-147ca61b-a058-4d00-9e86-3bc89bf9ec9e container client-container: <nil>
    STEP: delete the pod
    Sep  5 14:31:59.079: INFO: Waiting for pod downwardapi-volume-147ca61b-a058-4d00-9e86-3bc89bf9ec9e to disappear
    Sep  5 14:31:59.084: INFO: Pod downwardapi-volume-147ca61b-a058-4d00-9e86-3bc89bf9ec9e no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:31:59.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-213" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":104,"failed":0}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  5 14:31:55.467: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f69610bd-a250-468c-befd-4145b77803da" in namespace "downward-api-6823" to be "Succeeded or Failed"
    Sep  5 14:31:55.471: INFO: Pod "downwardapi-volume-f69610bd-a250-468c-befd-4145b77803da": Phase="Pending", Reason="", readiness=false. Elapsed: 3.965266ms
    Sep  5 14:31:57.475: INFO: Pod "downwardapi-volume-f69610bd-a250-468c-befd-4145b77803da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008265601s
    Sep  5 14:31:59.482: INFO: Pod "downwardapi-volume-f69610bd-a250-468c-befd-4145b77803da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015413441s
    STEP: Saw pod success
    Sep  5 14:31:59.482: INFO: Pod "downwardapi-volume-f69610bd-a250-468c-befd-4145b77803da" satisfied condition "Succeeded or Failed"
    Sep  5 14:31:59.487: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-worker-fcjlzd pod downwardapi-volume-f69610bd-a250-468c-befd-4145b77803da container client-container: <nil>
    STEP: delete the pod
    Sep  5 14:31:59.519: INFO: Waiting for pod downwardapi-volume-f69610bd-a250-468c-befd-4145b77803da to disappear
    Sep  5 14:31:59.524: INFO: Pod downwardapi-volume-f69610bd-a250-468c-befd-4145b77803da no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:31:59.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-6823" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":153,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 6 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:31:59.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-3061" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":8,"skipped":175,"failed":0}

    
    SS
    ------------------------------
    [BeforeEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
    STEP: Setting up data
    [It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating pod pod-subpath-test-configmap-dhnw
    STEP: Creating a pod to test atomic-volume-subpath
    Sep  5 14:31:39.580: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-dhnw" in namespace "subpath-7293" to be "Succeeded or Failed"
    Sep  5 14:31:39.586: INFO: Pod "pod-subpath-test-configmap-dhnw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.298524ms
    Sep  5 14:31:41.591: INFO: Pod "pod-subpath-test-configmap-dhnw": Phase="Running", Reason="", readiness=true. Elapsed: 2.011206453s
    Sep  5 14:31:43.596: INFO: Pod "pod-subpath-test-configmap-dhnw": Phase="Running", Reason="", readiness=true. Elapsed: 4.016500691s
    Sep  5 14:31:45.601: INFO: Pod "pod-subpath-test-configmap-dhnw": Phase="Running", Reason="", readiness=true. Elapsed: 6.021877004s
    Sep  5 14:31:47.608: INFO: Pod "pod-subpath-test-configmap-dhnw": Phase="Running", Reason="", readiness=true. Elapsed: 8.028465006s
    Sep  5 14:31:49.613: INFO: Pod "pod-subpath-test-configmap-dhnw": Phase="Running", Reason="", readiness=true. Elapsed: 10.033661453s
... skipping 2 lines ...
    Sep  5 14:31:55.628: INFO: Pod "pod-subpath-test-configmap-dhnw": Phase="Running", Reason="", readiness=true. Elapsed: 16.048748307s
    Sep  5 14:31:57.634: INFO: Pod "pod-subpath-test-configmap-dhnw": Phase="Running", Reason="", readiness=true. Elapsed: 18.054227049s
    Sep  5 14:31:59.642: INFO: Pod "pod-subpath-test-configmap-dhnw": Phase="Running", Reason="", readiness=true. Elapsed: 20.062042956s
    Sep  5 14:32:01.647: INFO: Pod "pod-subpath-test-configmap-dhnw": Phase="Running", Reason="", readiness=false. Elapsed: 22.067204171s
    Sep  5 14:32:03.656: INFO: Pod "pod-subpath-test-configmap-dhnw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.076074258s
    STEP: Saw pod success
    Sep  5 14:32:03.656: INFO: Pod "pod-subpath-test-configmap-dhnw" satisfied condition "Succeeded or Failed"
    Sep  5 14:32:03.663: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-md-0-62h4s-76fdb55b5f-9jgcx pod pod-subpath-test-configmap-dhnw container test-container-subpath-configmap-dhnw: <nil>
    STEP: delete the pod
    Sep  5 14:32:03.719: INFO: Waiting for pod pod-subpath-test-configmap-dhnw to disappear
    Sep  5 14:32:03.724: INFO: Pod pod-subpath-test-configmap-dhnw no longer exists
    STEP: Deleting pod pod-subpath-test-configmap-dhnw
    Sep  5 14:32:03.724: INFO: Deleting pod "pod-subpath-test-configmap-dhnw" in namespace "subpath-7293"
    [AfterEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:32:03.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "subpath-7293" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":-1,"completed":3,"skipped":28,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
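(Editor's note) The `Elapsed:` values above are Go-style duration strings (`ms`, `s`, and compound forms like `5m0s`, as produced by Go's `time.Duration`). A small sketch for converting them to seconds when post-processing a log like this; `parse_go_duration` is an illustrative helper, not part of any tooling used by this job:

```python
import re

# Unit factors for the duration suffixes Go emits.
_UNITS = {"ns": 1e-9, "us": 1e-6, "µs": 1e-6, "ms": 1e-3,
          "s": 1.0, "m": 60.0, "h": 3600.0}

def parse_go_duration(text):
    """Convert a Go duration string like "2.014046997s", "9.537199ms",
    or a compound "5m0s" into a float number of seconds."""
    total = 0.0
    # "ms"/"ns"/"us" must be tried before bare "m"/"s" in the alternation.
    for value, unit in re.findall(r"(\d+(?:\.\d+)?)(ns|us|µs|ms|s|m|h)", text):
        total += float(value) * _UNITS[unit]
    return total

print(parse_go_duration("5m0s"))  # 300.0
```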
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:31:59.114: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name projected-configmap-test-volume-map-d3339e24-b067-478d-be67-7e1b66af96ed
    STEP: Creating a pod to test consume configMaps
    Sep  5 14:31:59.180: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d98d8d97-a987-47d8-a1df-13d4520ae03f" in namespace "projected-7636" to be "Succeeded or Failed"
    Sep  5 14:31:59.184: INFO: Pod "pod-projected-configmaps-d98d8d97-a987-47d8-a1df-13d4520ae03f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.873003ms
    Sep  5 14:32:01.191: INFO: Pod "pod-projected-configmaps-d98d8d97-a987-47d8-a1df-13d4520ae03f": Phase="Running", Reason="", readiness=true. Elapsed: 2.010467488s
    Sep  5 14:32:03.209: INFO: Pod "pod-projected-configmaps-d98d8d97-a987-47d8-a1df-13d4520ae03f": Phase="Running", Reason="", readiness=false. Elapsed: 4.028447454s
    Sep  5 14:32:05.216: INFO: Pod "pod-projected-configmaps-d98d8d97-a987-47d8-a1df-13d4520ae03f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.035613876s
    STEP: Saw pod success
    Sep  5 14:32:05.216: INFO: Pod "pod-projected-configmaps-d98d8d97-a987-47d8-a1df-13d4520ae03f" satisfied condition "Succeeded or Failed"
    Sep  5 14:32:05.223: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-worker-9oo03u pod pod-projected-configmaps-d98d8d97-a987-47d8-a1df-13d4520ae03f container agnhost-container: <nil>
    STEP: delete the pod
    Sep  5 14:32:05.244: INFO: Waiting for pod pod-projected-configmaps-d98d8d97-a987-47d8-a1df-13d4520ae03f to disappear
    Sep  5 14:32:05.254: INFO: Pod pod-projected-configmaps-d98d8d97-a987-47d8-a1df-13d4520ae03f no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:32:05.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-7636" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":108,"failed":0}

    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:32:03.821: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test env composition
    Sep  5 14:32:03.890: INFO: Waiting up to 5m0s for pod "var-expansion-168afa42-3866-4c69-ab7c-14e4ff5450c0" in namespace "var-expansion-3233" to be "Succeeded or Failed"
    Sep  5 14:32:03.895: INFO: Pod "var-expansion-168afa42-3866-4c69-ab7c-14e4ff5450c0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.964283ms
    Sep  5 14:32:05.904: INFO: Pod "var-expansion-168afa42-3866-4c69-ab7c-14e4ff5450c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013823429s
    Sep  5 14:32:07.910: INFO: Pod "var-expansion-168afa42-3866-4c69-ab7c-14e4ff5450c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020228317s
    STEP: Saw pod success
    Sep  5 14:32:07.910: INFO: Pod "var-expansion-168afa42-3866-4c69-ab7c-14e4ff5450c0" satisfied condition "Succeeded or Failed"
    Sep  5 14:32:07.914: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-md-0-62h4s-76fdb55b5f-tzhp2 pod var-expansion-168afa42-3866-4c69-ab7c-14e4ff5450c0 container dapi-container: <nil>
    STEP: delete the pod
    Sep  5 14:32:07.950: INFO: Waiting for pod var-expansion-168afa42-3866-4c69-ab7c-14e4ff5450c0 to disappear
    Sep  5 14:32:07.956: INFO: Pod var-expansion-168afa42-3866-4c69-ab7c-14e4ff5450c0 no longer exists
    [AfterEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:32:07.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-3233" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":51,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 41 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:32:10.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-3297" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":-1,"completed":9,"skipped":177,"failed":0}

    
    SSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
    STEP: updating the pod
    Sep  5 14:32:07.936: INFO: Successfully updated pod "pod-update-activedeadlineseconds-7fcb2ba4-0ff0-48eb-951b-b398bd88a394"
    Sep  5 14:32:07.936: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-7fcb2ba4-0ff0-48eb-951b-b398bd88a394" in namespace "pods-7722" to be "terminated due to deadline exceeded"
    Sep  5 14:32:07.944: INFO: Pod "pod-update-activedeadlineseconds-7fcb2ba4-0ff0-48eb-951b-b398bd88a394": Phase="Running", Reason="", readiness=true. Elapsed: 7.750027ms
    Sep  5 14:32:09.951: INFO: Pod "pod-update-activedeadlineseconds-7fcb2ba4-0ff0-48eb-951b-b398bd88a394": Phase="Running", Reason="", readiness=true. Elapsed: 2.014709581s
    Sep  5 14:32:11.958: INFO: Pod "pod-update-activedeadlineseconds-7fcb2ba4-0ff0-48eb-951b-b398bd88a394": Phase="Running", Reason="", readiness=false. Elapsed: 4.0216437s
    Sep  5 14:32:13.964: INFO: Pod "pod-update-activedeadlineseconds-7fcb2ba4-0ff0-48eb-951b-b398bd88a394": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 6.027695653s
    Sep  5 14:32:13.964: INFO: Pod "pod-update-activedeadlineseconds-7fcb2ba4-0ff0-48eb-951b-b398bd88a394" satisfied condition "terminated due to deadline exceeded"
    [AfterEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:32:13.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-7722" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":121,"failed":0}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] EndpointSliceMirroring
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:32:14.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "endpointslicemirroring-920" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":-1,"completed":5,"skipped":82,"failed":0}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 26 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:32:15.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "certificates-2476" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":-1,"completed":6,"skipped":85,"failed":0}

    
    S
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":180,"failed":0}

    [BeforeEach] [sig-apps] StatefulSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:31:57.948: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename statefulset
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 7 lines ...
    STEP: Looking for a node to schedule stateful set and pod
    STEP: Creating pod with conflicting port in namespace statefulset-2440
    STEP: Waiting until pod test-pod will start running in namespace statefulset-2440
    STEP: Creating statefulset with conflicting port in namespace statefulset-2440
    STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-2440
    Sep  5 14:32:06.087: INFO: Observed stateful pod in namespace: statefulset-2440, name: ss-0, uid: a9a865ac-da25-484e-86e8-fa36f783245b, status phase: Pending. Waiting for statefulset controller to delete.
    Sep  5 14:32:06.117: INFO: Observed stateful pod in namespace: statefulset-2440, name: ss-0, uid: a9a865ac-da25-484e-86e8-fa36f783245b, status phase: Failed. Waiting for statefulset controller to delete.
    Sep  5 14:32:06.140: INFO: Observed stateful pod in namespace: statefulset-2440, name: ss-0, uid: a9a865ac-da25-484e-86e8-fa36f783245b, status phase: Failed. Waiting for statefulset controller to delete.

    Sep  5 14:32:06.146: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-2440
    STEP: Removing pod with conflicting port in namespace statefulset-2440
    STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-2440 and will be in running state
    [AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118
    Sep  5 14:32:08.203: INFO: Deleting all statefulset in ns statefulset-2440
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:32:18.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "statefulset-2440" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":-1,"completed":7,"skipped":180,"failed":0}

    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 23 lines ...
    STEP: Destroying namespace "webhook-1951-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":7,"skipped":86,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:32:13.998: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating projection with secret that has name projected-secret-test-92ebb3c7-7907-473f-8c4c-78b12e67c1a9
    STEP: Creating a pod to test consume secrets
    Sep  5 14:32:14.049: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9b9e6ecc-2d83-4479-a8f9-95b06674d6cf" in namespace "projected-338" to be "Succeeded or Failed"
    Sep  5 14:32:14.054: INFO: Pod "pod-projected-secrets-9b9e6ecc-2d83-4479-a8f9-95b06674d6cf": Phase="Pending", Reason="", readiness=false. Elapsed: 5.136649ms
    Sep  5 14:32:16.063: INFO: Pod "pod-projected-secrets-9b9e6ecc-2d83-4479-a8f9-95b06674d6cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014425847s
    Sep  5 14:32:18.070: INFO: Pod "pod-projected-secrets-9b9e6ecc-2d83-4479-a8f9-95b06674d6cf": Phase="Running", Reason="", readiness=true. Elapsed: 4.021020345s
    Sep  5 14:32:20.075: INFO: Pod "pod-projected-secrets-9b9e6ecc-2d83-4479-a8f9-95b06674d6cf": Phase="Running", Reason="", readiness=false. Elapsed: 6.025771541s
    Sep  5 14:32:22.086: INFO: Pod "pod-projected-secrets-9b9e6ecc-2d83-4479-a8f9-95b06674d6cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.037130659s
    STEP: Saw pod success
    Sep  5 14:32:22.086: INFO: Pod "pod-projected-secrets-9b9e6ecc-2d83-4479-a8f9-95b06674d6cf" satisfied condition "Succeeded or Failed"
    Sep  5 14:32:22.093: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-md-0-62h4s-76fdb55b5f-tzhp2 pod pod-projected-secrets-9b9e6ecc-2d83-4479-a8f9-95b06674d6cf container projected-secret-volume-test: <nil>
    STEP: delete the pod
    Sep  5 14:32:22.139: INFO: Waiting for pod pod-projected-secrets-9b9e6ecc-2d83-4479-a8f9-95b06674d6cf to disappear
    Sep  5 14:32:22.155: INFO: Pod pod-projected-secrets-9b9e6ecc-2d83-4479-a8f9-95b06674d6cf no longer exists
    [AfterEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:32:22.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-338" for this suite.
    
    •S
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":129,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:32:22.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-9440" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":-1,"completed":8,"skipped":174,"failed":0}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 24 lines ...
    Sep  5 14:32:22.261: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename svcaccounts
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should mount projected service account token [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test service account token: 
    Sep  5 14:32:22.356: INFO: Waiting up to 5m0s for pod "test-pod-e72ada35-b349-4659-9f61-b8e577c50d7a" in namespace "svcaccounts-4776" to be "Succeeded or Failed"
    Sep  5 14:32:22.365: INFO: Pod "test-pod-e72ada35-b349-4659-9f61-b8e577c50d7a": Phase="Pending", Reason="", readiness=false. Elapsed: 9.370426ms
    Sep  5 14:32:24.380: INFO: Pod "test-pod-e72ada35-b349-4659-9f61-b8e577c50d7a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024013156s
    Sep  5 14:32:26.394: INFO: Pod "test-pod-e72ada35-b349-4659-9f61-b8e577c50d7a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038120013s
    STEP: Saw pod success
    Sep  5 14:32:26.395: INFO: Pod "test-pod-e72ada35-b349-4659-9f61-b8e577c50d7a" satisfied condition "Succeeded or Failed"
    Sep  5 14:32:26.406: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-md-0-62h4s-76fdb55b5f-tzhp2 pod test-pod-e72ada35-b349-4659-9f61-b8e577c50d7a container agnhost-container: <nil>
    STEP: delete the pod
    Sep  5 14:32:26.447: INFO: Waiting for pod test-pod-e72ada35-b349-4659-9f61-b8e577c50d7a to disappear
    Sep  5 14:32:26.453: INFO: Pod test-pod-e72ada35-b349-4659-9f61-b8e577c50d7a no longer exists
    [AfterEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:32:26.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svcaccounts-4776" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":-1,"completed":10,"skipped":143,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 26 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:32:28.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-2252" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":11,"skipped":170,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":196,"failed":0}

    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:32:22.537: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename kubectl
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 188 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:32:31.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-5037" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":-1,"completed":9,"skipped":196,"failed":0}

    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 28 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:32:31.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-6430" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":9,"skipped":188,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:32:28.947: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename containers
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test override all
    Sep  5 14:32:29.002: INFO: Waiting up to 5m0s for pod "client-containers-e035774d-7c42-4067-83b6-27ef6764c46a" in namespace "containers-7963" to be "Succeeded or Failed"
    Sep  5 14:32:29.007: INFO: Pod "client-containers-e035774d-7c42-4067-83b6-27ef6764c46a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.874982ms
    Sep  5 14:32:31.012: INFO: Pod "client-containers-e035774d-7c42-4067-83b6-27ef6764c46a": Phase="Running", Reason="", readiness=false. Elapsed: 2.009731276s
    Sep  5 14:32:33.020: INFO: Pod "client-containers-e035774d-7c42-4067-83b6-27ef6764c46a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018162103s
    STEP: Saw pod success
    Sep  5 14:32:33.020: INFO: Pod "client-containers-e035774d-7c42-4067-83b6-27ef6764c46a" satisfied condition "Succeeded or Failed"
    Sep  5 14:32:33.030: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-worker-fcjlzd pod client-containers-e035774d-7c42-4067-83b6-27ef6764c46a container agnhost-container: <nil>
    STEP: delete the pod
    Sep  5 14:32:33.055: INFO: Waiting for pod client-containers-e035774d-7c42-4067-83b6-27ef6764c46a to disappear
    Sep  5 14:32:33.063: INFO: Pod client-containers-e035774d-7c42-4067-83b6-27ef6764c46a no longer exists
    [AfterEach] [sig-node] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:32:33.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "containers-7963" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":221,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:32:33.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-7214" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":13,"skipped":257,"failed":0}

    
    S
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
    STEP: Destroying namespace "services-3476" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":-1,"completed":10,"skipped":218,"failed":0}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:32:39.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-883" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","total":-1,"completed":11,"skipped":225,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Watchers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 23 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:32:43.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "watch-3584" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":14,"skipped":258,"failed":0}

    
    SS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 31 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:32:43.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-885" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":-1,"completed":12,"skipped":255,"failed":0}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 17 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:32:44.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-729" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":-1,"completed":10,"skipped":209,"failed":0}

    
    SSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:32:45.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-6693" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":260,"failed":0}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:32:43.950: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0644 on tmpfs
    Sep  5 14:32:44.010: INFO: Waiting up to 5m0s for pod "pod-b45934d6-1c20-4962-a163-249a9d90d495" in namespace "emptydir-4146" to be "Succeeded or Failed"
    Sep  5 14:32:44.015: INFO: Pod "pod-b45934d6-1c20-4962-a163-249a9d90d495": Phase="Pending", Reason="", readiness=false. Elapsed: 4.295867ms
    Sep  5 14:32:46.025: INFO: Pod "pod-b45934d6-1c20-4962-a163-249a9d90d495": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015011245s
    Sep  5 14:32:48.032: INFO: Pod "pod-b45934d6-1c20-4962-a163-249a9d90d495": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021233787s
    STEP: Saw pod success
    Sep  5 14:32:48.032: INFO: Pod "pod-b45934d6-1c20-4962-a163-249a9d90d495" satisfied condition "Succeeded or Failed"
    Sep  5 14:32:48.036: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-worker-9oo03u pod pod-b45934d6-1c20-4962-a163-249a9d90d495 container test-container: <nil>
    STEP: delete the pod
    Sep  5 14:32:48.062: INFO: Waiting for pod pod-b45934d6-1c20-4962-a163-249a9d90d495 to disappear
    Sep  5 14:32:48.066: INFO: Pod pod-b45934d6-1c20-4962-a163-249a9d90d495 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:32:48.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-4146" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":260,"failed":0}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:36
    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
... skipping 4 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:65
    [It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
    STEP: Watching for error events or started pod
    STEP: Waiting for pod completion
    STEP: Checking that the pod succeeded
    STEP: Getting logs from the pod
    STEP: Checking that the sysctl is actually updated
    [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:32:49.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "sysctl-3928" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":16,"skipped":264,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 24 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:32:54.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-9814" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":-1,"completed":11,"skipped":224,"failed":0}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:33:01.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-9787" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":-1,"completed":17,"skipped":368,"failed":0}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-network] IngressClass API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 22 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:33:01.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "ingressclass-7683" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] IngressClass API  should support creating IngressClass API operations [Conformance]","total":-1,"completed":18,"skipped":373,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 20 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:33:04.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-6549" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":-1,"completed":14,"skipped":268,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
    STEP: Registering the webhook via the AdmissionRegistration API
    Sep  5 14:32:24.384: INFO: Waiting for webhook configuration to be ready...
    Sep  5 14:32:34.498: INFO: Waiting for webhook configuration to be ready...
    Sep  5 14:32:44.605: INFO: Waiting for webhook configuration to be ready...
    Sep  5 14:32:54.737: INFO: Waiting for webhook configuration to be ready...
    Sep  5 14:33:04.776: INFO: Waiting for webhook configuration to be ready...
    Sep  5 14:33:04.777: FAIL: waiting for webhook configuration to be ready
    Unexpected error:
        <*errors.errorString | 0xc000248290>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 23 lines ...
    [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
      should be able to deny pod and configmap creation [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep  5 14:33:04.777: waiting for webhook configuration to be ready
      Unexpected error:
          <*errors.errorString | 0xc000248290>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:33:06.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubelet-test-2278" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":488,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:33:06.149: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name projected-configmap-test-volume-5611edd9-6582-4015-96e9-3d655d3080d1
    STEP: Creating a pod to test consume configMaps
    Sep  5 14:33:06.261: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c1783a90-f739-4c3e-9065-bcc750885584" in namespace "projected-7127" to be "Succeeded or Failed"
    Sep  5 14:33:06.266: INFO: Pod "pod-projected-configmaps-c1783a90-f739-4c3e-9065-bcc750885584": Phase="Pending", Reason="", readiness=false. Elapsed: 5.22744ms
    Sep  5 14:33:08.287: INFO: Pod "pod-projected-configmaps-c1783a90-f739-4c3e-9065-bcc750885584": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025800634s
    Sep  5 14:33:10.294: INFO: Pod "pod-projected-configmaps-c1783a90-f739-4c3e-9065-bcc750885584": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033069515s
    STEP: Saw pod success
    Sep  5 14:33:10.294: INFO: Pod "pod-projected-configmaps-c1783a90-f739-4c3e-9065-bcc750885584" satisfied condition "Succeeded or Failed"
    Sep  5 14:33:10.301: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-md-0-62h4s-76fdb55b5f-9jgcx pod pod-projected-configmaps-c1783a90-f739-4c3e-9065-bcc750885584 container projected-configmap-volume-test: <nil>
    STEP: delete the pod
    Sep  5 14:33:10.337: INFO: Waiting for pod pod-projected-configmaps-c1783a90-f739-4c3e-9065-bcc750885584 to disappear
    Sep  5 14:33:10.347: INFO: Pod pod-projected-configmaps-c1783a90-f739-4c3e-9065-bcc750885584 no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:33:10.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-7127" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":510,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:33:04.505: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename downward-api
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward api env vars
    Sep  5 14:33:04.605: INFO: Waiting up to 5m0s for pod "downward-api-60c0b050-db32-46a5-92e6-0db5b958cf48" in namespace "downward-api-7684" to be "Succeeded or Failed"
    Sep  5 14:33:04.614: INFO: Pod "downward-api-60c0b050-db32-46a5-92e6-0db5b958cf48": Phase="Pending", Reason="", readiness=false. Elapsed: 9.281007ms
    Sep  5 14:33:06.625: INFO: Pod "downward-api-60c0b050-db32-46a5-92e6-0db5b958cf48": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019919465s
    Sep  5 14:33:08.634: INFO: Pod "downward-api-60c0b050-db32-46a5-92e6-0db5b958cf48": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029656736s
    Sep  5 14:33:10.658: INFO: Pod "downward-api-60c0b050-db32-46a5-92e6-0db5b958cf48": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.053591957s
    STEP: Saw pod success
    Sep  5 14:33:10.658: INFO: Pod "downward-api-60c0b050-db32-46a5-92e6-0db5b958cf48" satisfied condition "Succeeded or Failed"
    Sep  5 14:33:10.665: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-md-0-62h4s-76fdb55b5f-tzhp2 pod downward-api-60c0b050-db32-46a5-92e6-0db5b958cf48 container dapi-container: <nil>
    STEP: delete the pod
    Sep  5 14:33:10.732: INFO: Waiting for pod downward-api-60c0b050-db32-46a5-92e6-0db5b958cf48 to disappear
    Sep  5 14:33:10.738: INFO: Pod downward-api-60c0b050-db32-46a5-92e6-0db5b958cf48 no longer exists
    [AfterEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:33:10.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-7684" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":295,"failed":0}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:33:10.465: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0666 on node default medium
    Sep  5 14:33:10.598: INFO: Waiting up to 5m0s for pod "pod-8d9fddb4-9d30-41ef-9916-db0938ffcfae" in namespace "emptydir-5719" to be "Succeeded or Failed"
    Sep  5 14:33:10.640: INFO: Pod "pod-8d9fddb4-9d30-41ef-9916-db0938ffcfae": Phase="Pending", Reason="", readiness=false. Elapsed: 42.607918ms
    Sep  5 14:33:12.648: INFO: Pod "pod-8d9fddb4-9d30-41ef-9916-db0938ffcfae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050305058s
    Sep  5 14:33:14.660: INFO: Pod "pod-8d9fddb4-9d30-41ef-9916-db0938ffcfae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.062519817s
    STEP: Saw pod success
    Sep  5 14:33:14.660: INFO: Pod "pod-8d9fddb4-9d30-41ef-9916-db0938ffcfae" satisfied condition "Succeeded or Failed"
    Sep  5 14:33:14.668: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-md-0-62h4s-76fdb55b5f-tzhp2 pod pod-8d9fddb4-9d30-41ef-9916-db0938ffcfae container test-container: <nil>
    STEP: delete the pod
    Sep  5 14:33:14.700: INFO: Waiting for pod pod-8d9fddb4-9d30-41ef-9916-db0938ffcfae to disappear
    Sep  5 14:33:14.707: INFO: Pod pod-8d9fddb4-9d30-41ef-9916-db0938ffcfae no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:33:14.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-5719" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":531,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 24 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:33:26.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-6819" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":22,"skipped":557,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Job
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:33:30.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "job-4091" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":-1,"completed":12,"skipped":234,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 21 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:33:32.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-probe-4386" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":302,"failed":0}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 52 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:33:33.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-9742" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":-1,"completed":23,"skipped":601,"failed":0}

    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Service endpoints latency
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 419 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:33:42.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svc-latency-78" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":-1,"completed":13,"skipped":266,"failed":0}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Events
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 17 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:33:49.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "events-8548" for this suite.
    
    •
    ------------------------------
    {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":-1,"completed":9,"skipped":195,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]}

    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:33:04.966: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename webhook
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 12 lines ...
    STEP: Registering the webhook via the AdmissionRegistration API
    Sep  5 14:33:19.005: INFO: Waiting for webhook configuration to be ready...
    Sep  5 14:33:29.129: INFO: Waiting for webhook configuration to be ready...
    Sep  5 14:33:39.494: INFO: Waiting for webhook configuration to be ready...
    Sep  5 14:33:49.591: INFO: Waiting for webhook configuration to be ready...
    Sep  5 14:33:59.625: INFO: Waiting for webhook configuration to be ready...
    Sep  5 14:33:59.626: FAIL: waiting for webhook configuration to be ready
    Unexpected error:
        <*errors.errorString | 0xc000248290>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 23 lines ...
    [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
      should be able to deny pod and configmap creation [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep  5 14:33:59.626: waiting for webhook configuration to be ready
      Unexpected error:
          <*errors.errorString | 0xc000248290>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
... skipping 52 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:34:04.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-7403" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":17,"skipped":307,"failed":0}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:34:04.825: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-test-volume-aa979811-a7d3-4844-b5b2-e318fc758d82
    STEP: Creating a pod to test consume configMaps
    Sep  5 14:34:04.949: INFO: Waiting up to 5m0s for pod "pod-configmaps-7c8f00bf-a030-4cb5-90e6-530402de204d" in namespace "configmap-8274" to be "Succeeded or Failed"
    Sep  5 14:34:04.957: INFO: Pod "pod-configmaps-7c8f00bf-a030-4cb5-90e6-530402de204d": Phase="Pending", Reason="", readiness=false. Elapsed: 7.62404ms
    Sep  5 14:34:06.967: INFO: Pod "pod-configmaps-7c8f00bf-a030-4cb5-90e6-530402de204d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017623166s
    Sep  5 14:34:08.977: INFO: Pod "pod-configmaps-7c8f00bf-a030-4cb5-90e6-530402de204d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027798424s
    STEP: Saw pod success
    Sep  5 14:34:08.977: INFO: Pod "pod-configmaps-7c8f00bf-a030-4cb5-90e6-530402de204d" satisfied condition "Succeeded or Failed"
    Sep  5 14:34:08.983: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-worker-fcjlzd pod pod-configmaps-7c8f00bf-a030-4cb5-90e6-530402de204d container agnhost-container: <nil>
    STEP: delete the pod
    Sep  5 14:34:09.036: INFO: Waiting for pod pod-configmaps-7c8f00bf-a030-4cb5-90e6-530402de204d to disappear
    Sep  5 14:34:09.042: INFO: Pod pod-configmaps-7c8f00bf-a030-4cb5-90e6-530402de204d no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:34:09.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-8274" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":313,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 18 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:34:11.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-1871" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":-1,"completed":19,"skipped":338,"failed":0}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 16 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:34:40.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-4785" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":613,"failed":0}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Kubelet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:34:42.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubelet-test-8447" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":620,"failed":0}

    
    SS
    ------------------------------
    [BeforeEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:34:11.635: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename svcaccounts
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep  5 14:34:11.725: INFO: created pod
    Sep  5 14:34:11.725: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-322" to be "Succeeded or Failed"
    Sep  5 14:34:11.730: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.982992ms
    Sep  5 14:34:13.738: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013012005s
    Sep  5 14:34:15.746: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020963085s
    STEP: Saw pod success
    Sep  5 14:34:15.746: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed"
    Sep  5 14:34:45.747: INFO: polling logs
    Sep  5 14:34:45.768: INFO: Pod logs: 
    I0905 14:34:12.774046       1 log.go:195] OK: Got token
    I0905 14:34:12.774204       1 log.go:195] validating with in-cluster discovery
    I0905 14:34:12.775600       1 log.go:195] OK: got issuer https://kubernetes.default.svc.cluster.local
    I0905 14:34:12.775658       1 log.go:195] Full, not-validated claims: 
... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:34:45.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svcaccounts-322" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":-1,"completed":20,"skipped":346,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:34:46.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-7345" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":-1,"completed":21,"skipped":388,"failed":0}

    
    S
    ------------------------------
    {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":-1,"completed":9,"skipped":195,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]}

    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:33:59.776: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename webhook
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 13 lines ...
    STEP: Registering the webhook via the AdmissionRegistration API
    Sep  5 14:34:15.684: INFO: Waiting for webhook configuration to be ready...
    Sep  5 14:34:25.822: INFO: Waiting for webhook configuration to be ready...
    Sep  5 14:34:35.918: INFO: Waiting for webhook configuration to be ready...
    Sep  5 14:34:46.012: INFO: Waiting for webhook configuration to be ready...
    Sep  5 14:34:56.033: INFO: Waiting for webhook configuration to be ready...
    Sep  5 14:34:56.034: FAIL: waiting for webhook configuration to be ready
    Unexpected error:
        <*errors.errorString | 0xc000248290>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 23 lines ...
    [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
      should be able to deny pod and configmap creation [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep  5 14:34:56.034: waiting for webhook configuration to be ready
      Unexpected error:
          <*errors.errorString | 0xc000248290>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:909
    ------------------------------
    {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":-1,"completed":9,"skipped":195,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 32 lines ...
    
    Sep  5 14:34:59.034: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment":
    &ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88  deployment-7285  0ef9f8fc-59a7-4550-a669-e126c06dc5ee 7344 3 2022-09-05 14:34:56 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 0484397d-7b67-4c03-a4d5-f24e682dca51 0xc004402437 0xc004402438}] []  [{kube-controller-manager Update apps/v1 2022-09-05 14:34:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0484397d-7b67-4c03-a4d5-f24e682dca51\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-09-05 14:34:56 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0044024d8 <nil> ClusterFirst map[]   <nil>  false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
    Sep  5 14:34:59.034: INFO: All old ReplicaSets of Deployment "webserver-deployment":
    Sep  5 14:34:59.035: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-847dcfb7fb  deployment-7285  68a7760d-df6c-4cb8-befb-13e2675b0b75 7342 3 2022-09-05 14:34:46 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 0484397d-7b67-4c03-a4d5-f24e682dca51 0xc004402537 0xc004402538}] []  [{kube-controller-manager Update apps/v1 2022-09-05 14:34:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0484397d-7b67-4c03-a4d5-f24e682dca51\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-09-05 14:34:48 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[] [] []  []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0044025c8 <nil> ClusterFirst map[]   <nil>  false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
    Sep  5 14:34:59.092: INFO: Pod "webserver-deployment-795d758f88-6t7sr" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-6t7sr webserver-deployment-795d758f88- deployment-7285  e2bb58dc-0f3d-4115-a511-cac38298f522 7335 0 2022-09-05 14:34:56 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 0ef9f8fc-59a7-4550-a669-e126c06dc5ee 0xc00070d2d0 0xc00070d2d1}] []  [{kube-controller-manager Update v1 2022-09-05 14:34:56 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0ef9f8fc-59a7-4550-a669-e126c06dc5ee\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-09-05 14:34:58 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.29\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-t9f5b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t9f5b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-s5wz98-worker-fcjlzd,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-05 14:34:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-05 14:34:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2022-09-05 14:34:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-05 14:34:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:192.168.2.29,StartTime:2022-09-05 14:34:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.29,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

    Sep  5 14:34:59.092: INFO: Pod "webserver-deployment-795d758f88-8rwvc" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-8rwvc webserver-deployment-795d758f88- deployment-7285  3714994f-846f-46f1-8e75-1dfd33435ecc 7365 0 2022-09-05 14:34:59 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 0ef9f8fc-59a7-4550-a669-e126c06dc5ee 0xc00070d6d0 0xc00070d6d1}] []  [{kube-controller-manager Update v1 2022-09-05 14:34:59 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0ef9f8fc-59a7-4550-a669-e126c06dc5ee\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-m8j6j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{Downwa
rdAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m8j6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect
:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep  5 14:34:59.092: INFO: Pod "webserver-deployment-795d758f88-b8ltc" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-b8ltc webserver-deployment-795d758f88- deployment-7285  454e451f-4112-4e71-92cf-be5f8331c2a1 7364 0 2022-09-05 14:34:59 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 0ef9f8fc-59a7-4550-a669-e126c06dc5ee 0xc00070d9c7 0xc00070d9c8}] []  [{kube-controller-manager Update v1 2022-09-05 14:34:59 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0ef9f8fc-59a7-4550-a669-e126c06dc5ee\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-lpvxq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{Downwa
rdAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lpvxq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-s5wz98-worker-9oo03u,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kuberne
tes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-05 14:34:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep  5 14:34:59.092: INFO: Pod "webserver-deployment-795d758f88-bw65p" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-bw65p webserver-deployment-795d758f88- deployment-7285  1673c474-f12d-4e13-a99e-3ccccf860fe0 7369 0 2022-09-05 14:34:59 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 0ef9f8fc-59a7-4550-a669-e126c06dc5ee 0xc00070db70 0xc00070db71}] []  [{kube-controller-manager Update v1 2022-09-05 14:34:59 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0ef9f8fc-59a7-4550-a669-e126c06dc5ee\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-htpqf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{Downwa
rdAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-htpqf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect
:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep  5 14:34:59.093: INFO: Pod "webserver-deployment-795d758f88-g6pf8" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-g6pf8 webserver-deployment-795d758f88- deployment-7285  11b13213-1c15-4fb7-bcff-f8fc763425e8 7366 0 2022-09-05 14:34:59 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 0ef9f8fc-59a7-4550-a669-e126c06dc5ee 0xc00070dd67 0xc00070dd68}] []  [{kube-controller-manager Update v1 2022-09-05 14:34:59 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0ef9f8fc-59a7-4550-a669-e126c06dc5ee\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-97npq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{Downwa
rdAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-97npq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-s5wz98-worker-fcjlzd,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kuberne
tes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-05 14:34:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep  5 14:34:59.093: INFO: Pod "webserver-deployment-795d758f88-gmmnq" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-gmmnq webserver-deployment-795d758f88- deployment-7285  8e8f11e7-f3fc-4bff-a793-22c59d0fbd77 7274 0 2022-09-05 14:34:56 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 0ef9f8fc-59a7-4550-a669-e126c06dc5ee 0xc0049ca070 0xc0049ca071}] []  [{kube-controller-manager Update v1 2022-09-05 14:34:56 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0ef9f8fc-59a7-4550-a669-e126c06dc5ee\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-09-05 14:34:56 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-7tpbm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7tpbm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-s5wz98-md-0-62h4s-76fdb55b5f-tzhp2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-05 14:34:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-05 14:34:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2022-09-05 14:34:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-05 14:34:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2022-09-05 14:34:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep  5 14:34:59.094: INFO: Pod "webserver-deployment-795d758f88-mbvw7" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-mbvw7 webserver-deployment-795d758f88- deployment-7285  91d956e1-8657-4504-abf1-0c9f9d2d108e 7368 0 2022-09-05 14:34:59 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 0ef9f8fc-59a7-4550-a669-e126c06dc5ee 0xc0049ca240 0xc0049ca241}] []  [{kube-controller-manager Update v1 2022-09-05 14:34:59 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0ef9f8fc-59a7-4550-a669-e126c06dc5ee\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-fc9nc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{Downwa
rdAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fc9nc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect
:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep  5 14:34:59.099: INFO: Pod "webserver-deployment-795d758f88-pvpkq" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-pvpkq webserver-deployment-795d758f88- deployment-7285  ce96712c-c338-4b05-9334-93da1febe6bf 7336 0 2022-09-05 14:34:57 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 0ef9f8fc-59a7-4550-a669-e126c06dc5ee 0xc0049ca387 0xc0049ca388}] []  [{kube-controller-manager Update v1 2022-09-05 14:34:57 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0ef9f8fc-59a7-4550-a669-e126c06dc5ee\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-09-05 14:34:58 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.0.28\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-t9p6r,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t9p6r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-s5wz98-md-0-62h4s-76fdb55b5f-tzhp2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-05 14:34:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-05 14:34:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2022-09-05 14:34:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-05 14:34:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:192.168.0.28,StartTime:2022-09-05 14:34:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.0.28,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

    Sep  5 14:34:59.099: INFO: Pod "webserver-deployment-795d758f88-q6mw8" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-q6mw8 webserver-deployment-795d758f88- deployment-7285  7c51e337-0abb-4093-b2ca-969ad6001e10 7343 0 2022-09-05 14:34:57 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 0ef9f8fc-59a7-4550-a669-e126c06dc5ee 0xc0049ca590 0xc0049ca591}] []  [{kube-controller-manager Update v1 2022-09-05 14:34:57 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0ef9f8fc-59a7-4550-a669-e126c06dc5ee\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-09-05 14:34:58 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.23\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-rbkvn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rbkvn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-s5wz98-md-0-62h4s-76fdb55b5f-9jgcx,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-05 14:34:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-05 14:34:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2022-09-05 14:34:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-05 14:34:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:192.168.1.23,StartTime:2022-09-05 14:34:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.23,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

    Sep  5 14:34:59.100: INFO: Pod "webserver-deployment-795d758f88-srr9k" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-srr9k webserver-deployment-795d758f88- deployment-7285  7e60a3aa-de9e-4825-9b59-9e34f62e86d3 7370 0 2022-09-05 14:34:59 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 0ef9f8fc-59a7-4550-a669-e126c06dc5ee 0xc0049ca7a0 0xc0049ca7a1}] []  [{kube-controller-manager Update v1 2022-09-05 14:34:59 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0ef9f8fc-59a7-4550-a669-e126c06dc5ee\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-gcmqz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{Downwa
rdAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gcmqz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect
:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep  5 14:34:59.100: INFO: Pod "webserver-deployment-795d758f88-tt49b" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-tt49b webserver-deployment-795d758f88- deployment-7285  d6ae790f-6754-4ac6-a042-86138e52db0e 7362 0 2022-09-05 14:34:58 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 0ef9f8fc-59a7-4550-a669-e126c06dc5ee 0xc0049ca8e7 0xc0049ca8e8}] []  [{kube-controller-manager Update v1 2022-09-05 14:34:58 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0ef9f8fc-59a7-4550-a669-e126c06dc5ee\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-97kvp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{Downwa
rdAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-97kvp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-s5wz98-md-0-62h4s-76fdb55b5f-9jgcx,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Ke
y:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-05 14:34:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep  5 14:34:59.101: INFO: Pod "webserver-deployment-795d758f88-xh95l" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-xh95l webserver-deployment-795d758f88- deployment-7285  76b8114d-513d-4757-8c4e-cb54968ecf2f 7318 0 2022-09-05 14:34:56 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 0ef9f8fc-59a7-4550-a669-e126c06dc5ee 0xc0049caa50 0xc0049caa51}] []  [{kube-controller-manager Update v1 2022-09-05 14:34:56 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0ef9f8fc-59a7-4550-a669-e126c06dc5ee\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-09-05 14:34:58 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.6.21\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-n2kjm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n2kjm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-s5wz98-worker-9oo03u,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-05 14:34:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-05 14:34:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2022-09-05 14:34:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-05 14:34:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:192.168.6.21,StartTime:2022-09-05 14:34:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.6.21,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
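The dump above shows a pod stuck in `Pending`: its image `webserver:404` is intentionally unresolvable (the proportional-scaling test uses it to hold some replicas unready), so the kubelet reports `ErrImagePull` and the `Ready` condition stays `False`. A minimal sketch, with a hypothetical helper (not the e2e framework's actual code), of the availability check behind the "is available" / "is not available" log lines:

```python
# Hypothetical helper: decide availability from a pod status shaped like the
# dumps in this log. A pod is available when it is Running and Ready=True.

def is_available(status: dict) -> bool:
    """Mirror the check behind the 'is available' log lines above."""
    if status.get("phase") != "Running":
        return False
    conditions = {c["type"]: c["status"] for c in status.get("conditions", [])}
    return conditions.get("Ready") == "True"

# The ErrImagePull pod above: still Pending, Ready=False -> not available.
stuck = {
    "phase": "Pending",
    "conditions": [
        {"type": "Initialized", "status": "True"},
        {"type": "Ready", "status": "False"},  # ContainersNotReady: [httpd]
        {"type": "ContainersReady", "status": "False"},
        {"type": "PodScheduled", "status": "True"},
    ],
}

# The healthy httpd pods further down: Running with Ready=True -> available.
healthy = {
    "phase": "Running",
    "conditions": [{"type": "Ready", "status": "True"}],
}
```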
    Sep  5 14:34:59.102: INFO: Pod "webserver-deployment-847dcfb7fb-4sjwt" is available:
    &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-4sjwt webserver-deployment-847dcfb7fb- deployment-7285  4af3a48b-15c1-4e12-8343-88b4eafe6c20 7223 0 2022-09-05 14:34:46 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 68a7760d-df6c-4cb8-befb-13e2675b0b75 0xc0049cac50 0xc0049cac51}] []  [{kube-controller-manager Update v1 2022-09-05 14:34:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68a7760d-df6c-4cb8-befb-13e2675b0b75\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-09-05 14:34:55 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.27\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-qt5zd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qt5zd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-s5wz98-worker-fcjlzd,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-05 14:34:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-05 14:34:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-05 14:34:55 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-05 14:34:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:192.168.2.27,StartTime:2022-09-05 14:34:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-09-05 14:34:55 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://64fa47fbf2bc6ef5b9a9ffc8a0dd4ff28815df24324284a6987dcb7e22f0e559,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.27,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep  5 14:34:59.103: INFO: Pod "webserver-deployment-847dcfb7fb-8kn6k" is available:
    &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-8kn6k webserver-deployment-847dcfb7fb- deployment-7285  93725404-6e0a-4adb-84a6-08dbd157d476 7154 0 2022-09-05 14:34:46 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 68a7760d-df6c-4cb8-befb-13e2675b0b75 0xc0049cae20 0xc0049cae21}] []  [{kube-controller-manager Update v1 2022-09-05 14:34:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68a7760d-df6c-4cb8-befb-13e2675b0b75\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-09-05 14:34:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.20\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-dsbp2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dsbp2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-s5wz98-md-0-62h4s-76fdb55b5f-9jgcx,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-05 14:34:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-05 14:34:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-05 
14:34:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-05 14:34:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:192.168.1.20,StartTime:2022-09-05 14:34:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-09-05 14:34:48 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://e8cce0553d0be2971028e590e2722128bb495ec65cdd6e1425baadbd2105c14c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.20,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep  5 14:34:59.104: INFO: Pod "webserver-deployment-847dcfb7fb-c4frr" is available:
    &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-c4frr webserver-deployment-847dcfb7fb- deployment-7285  7196ac8e-0b00-4259-81ea-5f707ec40eda 7183 0 2022-09-05 14:34:46 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 68a7760d-df6c-4cb8-befb-13e2675b0b75 0xc0049caff0 0xc0049caff1}] []  [{kube-controller-manager Update v1 2022-09-05 14:34:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68a7760d-df6c-4cb8-befb-13e2675b0b75\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-09-05 14:34:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.21\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-s68tb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s68tb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-s5wz98-md-0-62h4s-76fdb55b5f-9jgcx,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-05 14:34:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-05 14:34:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-05 
14:34:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-05 14:34:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:192.168.1.21,StartTime:2022-09-05 14:34:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-09-05 14:34:49 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://530b056462a6441805c72fdd2d83bc9d3bb6579358d2a91ba2a2dc19d727a2c5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.21,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
... skipping 25 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:34:59.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-7285" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":22,"skipped":389,"failed":0}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:34:56.212: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating projection with secret that has name projected-secret-test-map-36c6fa0d-4216-4e2c-bbde-901dae0d116c
    STEP: Creating a pod to test consume secrets
    Sep  5 14:34:56.742: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-080dec38-45e5-4038-b036-8a1010371e3a" in namespace "projected-8464" to be "Succeeded or Failed"
    Sep  5 14:34:56.808: INFO: Pod "pod-projected-secrets-080dec38-45e5-4038-b036-8a1010371e3a": Phase="Pending", Reason="", readiness=false. Elapsed: 65.257638ms
    Sep  5 14:34:58.816: INFO: Pod "pod-projected-secrets-080dec38-45e5-4038-b036-8a1010371e3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074042785s
    Sep  5 14:35:00.826: INFO: Pod "pod-projected-secrets-080dec38-45e5-4038-b036-8a1010371e3a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083385672s
    Sep  5 14:35:02.834: INFO: Pod "pod-projected-secrets-080dec38-45e5-4038-b036-8a1010371e3a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.09199362s
    Sep  5 14:35:04.845: INFO: Pod "pod-projected-secrets-080dec38-45e5-4038-b036-8a1010371e3a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.102290479s
    STEP: Saw pod success
    Sep  5 14:35:04.845: INFO: Pod "pod-projected-secrets-080dec38-45e5-4038-b036-8a1010371e3a" satisfied condition "Succeeded or Failed"
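The wait above polls the pod's phase on a roughly 2-second cadence until it reaches `Succeeded` or `Failed`, bounded by the 5m0s timeout. A minimal sketch of that loop, with the phase lookup injected so it runs without a cluster (the function name and poll count are assumptions, not the framework's API):

```python
import itertools

# Sketch of the e2e wait loop: poll until the pod reaches a terminal phase
# ("Succeeded" or "Failed") or the attempt budget runs out. `get_phase` is
# injected so the sketch is self-contained.

def wait_for_terminal_phase(get_phase, max_polls=150):
    """Return the terminal phase; ~5m at the 2s poll interval seen above."""
    for _ in range(max_polls):
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
    raise TimeoutError("pod never reached Succeeded or Failed")

# Simulate the sequence logged above: a few Pending polls, then Succeeded.
phases = itertools.chain(["Pending"] * 4, itertools.repeat("Succeeded"))
result = wait_for_terminal_phase(lambda: next(phases))
```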
    Sep  5 14:35:04.852: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-md-0-62h4s-76fdb55b5f-tzhp2 pod pod-projected-secrets-080dec38-45e5-4038-b036-8a1010371e3a container projected-secret-volume-test: <nil>
    STEP: delete the pod
    Sep  5 14:35:04.894: INFO: Waiting for pod pod-projected-secrets-080dec38-45e5-4038-b036-8a1010371e3a to disappear
    Sep  5 14:35:04.902: INFO: Pod pod-projected-secrets-080dec38-45e5-4038-b036-8a1010371e3a no longer exists
    [AfterEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:35:04.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-8464" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":203,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]}
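Each `------`-delimited test block ends with a one-line JSON summary like the one above. Since it is plain JSON, the per-worker tallies can be recovered mechanically; a small sketch parsing that exact line (note that in this run the same flaky AdmissionWebhook spec is listed once per failed attempt, hence `"failed":3` with three identical entries):

```python
import json

# The one-line Ginkgo summary from the log, reproduced verbatim.
line = (
    '{"msg":"PASSED [sig-storage] Projected secret should be consumable from '
    'pods in volume with mappings [NodeConformance] [Conformance]",'
    '"total":-1,"completed":10,"skipped":203,"failed":3,'
    '"failures":['
    '"[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should '
    'be able to deny pod and configmap creation [Conformance]",'
    '"[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should '
    'be able to deny pod and configmap creation [Conformance]",'
    '"[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should '
    'be able to deny pod and configmap creation [Conformance]"]}'
)

summary = json.loads(line)
passed = summary["msg"].startswith("PASSED")
# Deduplicate: three failed attempts, but only one distinct failing spec.
unique_failures = set(summary["failures"])
```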

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 27 lines ...
    STEP: Destroying namespace "webhook-6174-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":23,"skipped":397,"failed":0}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:35:05.075: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir volume type on node default medium
    Sep  5 14:35:05.215: INFO: Waiting up to 5m0s for pod "pod-f209900f-8f00-4cb7-908b-3564941d25b0" in namespace "emptydir-3162" to be "Succeeded or Failed"
    Sep  5 14:35:05.235: INFO: Pod "pod-f209900f-8f00-4cb7-908b-3564941d25b0": Phase="Pending", Reason="", readiness=false. Elapsed: 17.609144ms
    Sep  5 14:35:07.274: INFO: Pod "pod-f209900f-8f00-4cb7-908b-3564941d25b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056592876s
    Sep  5 14:35:09.310: INFO: Pod "pod-f209900f-8f00-4cb7-908b-3564941d25b0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092752996s
    Sep  5 14:35:11.342: INFO: Pod "pod-f209900f-8f00-4cb7-908b-3564941d25b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.124554127s
    STEP: Saw pod success
    Sep  5 14:35:11.342: INFO: Pod "pod-f209900f-8f00-4cb7-908b-3564941d25b0" satisfied condition "Succeeded or Failed"
    Sep  5 14:35:11.356: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-worker-fcjlzd pod pod-f209900f-8f00-4cb7-908b-3564941d25b0 container test-container: <nil>
    STEP: delete the pod
    Sep  5 14:35:11.391: INFO: Waiting for pod pod-f209900f-8f00-4cb7-908b-3564941d25b0 to disappear
    Sep  5 14:35:11.402: INFO: Pod pod-f209900f-8f00-4cb7-908b-3564941d25b0 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 8 lines ...
    Sep  5 14:35:10.578: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0777 on tmpfs
    Sep  5 14:35:10.732: INFO: Waiting up to 5m0s for pod "pod-96f67fc1-d9db-4d3f-9ee3-926e75207dc2" in namespace "emptydir-3691" to be "Succeeded or Failed"
    Sep  5 14:35:10.761: INFO: Pod "pod-96f67fc1-d9db-4d3f-9ee3-926e75207dc2": Phase="Pending", Reason="", readiness=false. Elapsed: 29.619997ms
    Sep  5 14:35:12.770: INFO: Pod "pod-96f67fc1-d9db-4d3f-9ee3-926e75207dc2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038604979s
    Sep  5 14:35:14.778: INFO: Pod "pod-96f67fc1-d9db-4d3f-9ee3-926e75207dc2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045844795s
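The `Elapsed` values above are Go `time.Duration` strings, so units vary (`29.619997ms` vs `2.038604979s`). A small sketch (a hypothetical converter handling only the `ms`/`s` units that appear in this log) that normalizes them to seconds and makes the ~2s poll cadence visible:

```python
import re

# Convert the Go-style duration strings from the wait logs to seconds.
# Only the "ms" and "s" suffixes seen in this log are handled.

def to_seconds(d: str) -> float:
    value, unit = re.fullmatch(r"([\d.]+)(ms|s)", d).groups()
    return float(value) / 1000 if unit == "ms" else float(value)

# The three Elapsed values from the wait loop above.
elapsed = [to_seconds(d) for d in ("29.619997ms", "2.038604979s", "4.045844795s")]
# Gaps between successive polls: roughly the 2s poll interval.
gaps = [b - a for a, b in zip(elapsed, elapsed[1:])]
```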
    STEP: Saw pod success
    Sep  5 14:35:14.778: INFO: Pod "pod-96f67fc1-d9db-4d3f-9ee3-926e75207dc2" satisfied condition "Succeeded or Failed"
    Sep  5 14:35:14.788: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-md-0-62h4s-76fdb55b5f-tzhp2 pod pod-96f67fc1-d9db-4d3f-9ee3-926e75207dc2 container test-container: <nil>
    STEP: delete the pod
    Sep  5 14:35:14.826: INFO: Waiting for pod pod-96f67fc1-d9db-4d3f-9ee3-926e75207dc2 to disappear
    Sep  5 14:35:14.834: INFO: Pod pod-96f67fc1-d9db-4d3f-9ee3-926e75207dc2 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:35:14.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-3691" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":404,"failed":0}

    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:35:14.856: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename webhook
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 18 lines ...
    STEP: Destroying namespace "webhook-2636-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":-1,"completed":25,"skipped":404,"failed":0}
    
    SSSSSSSS
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":230,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]}

    [BeforeEach] [sig-node] Container Runtime
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:35:11.437: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename container-runtime
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 18 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:35:41.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-runtime-701" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":230,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]}
    
    SS
    ------------------------------
    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 8 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:35:43.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-probe-8262" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":622,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 125 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:35:44.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-9269" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":-1,"completed":26,"skipped":412,"failed":0}
    
    SS
    ------------------------------
    [BeforeEach] [sig-node] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:35:41.366: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename containers
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test override arguments
    Sep  5 14:35:41.557: INFO: Waiting up to 5m0s for pod "client-containers-cf377384-2078-4a7d-83bd-c36c0a3e4428" in namespace "containers-7627" to be "Succeeded or Failed"
    Sep  5 14:35:41.565: INFO: Pod "client-containers-cf377384-2078-4a7d-83bd-c36c0a3e4428": Phase="Pending", Reason="", readiness=false. Elapsed: 7.994538ms
    Sep  5 14:35:43.572: INFO: Pod "client-containers-cf377384-2078-4a7d-83bd-c36c0a3e4428": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01479137s
    Sep  5 14:35:45.582: INFO: Pod "client-containers-cf377384-2078-4a7d-83bd-c36c0a3e4428": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025320899s
    Sep  5 14:35:47.591: INFO: Pod "client-containers-cf377384-2078-4a7d-83bd-c36c0a3e4428": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.033824036s
    STEP: Saw pod success
    Sep  5 14:35:47.591: INFO: Pod "client-containers-cf377384-2078-4a7d-83bd-c36c0a3e4428" satisfied condition "Succeeded or Failed"
    Sep  5 14:35:47.599: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-md-0-62h4s-76fdb55b5f-tzhp2 pod client-containers-cf377384-2078-4a7d-83bd-c36c0a3e4428 container agnhost-container: <nil>
    STEP: delete the pod
    Sep  5 14:35:47.641: INFO: Waiting for pod client-containers-cf377384-2078-4a7d-83bd-c36c0a3e4428 to disappear
    Sep  5 14:35:47.661: INFO: Pod client-containers-cf377384-2078-4a7d-83bd-c36c0a3e4428 no longer exists
    [AfterEach] [sig-node] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:35:47.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "containers-7627" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":232,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]}
    
    SS
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:35:44.836: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should allow substituting values in a container's command [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test substitution in container's command
    Sep  5 14:35:44.975: INFO: Waiting up to 5m0s for pod "var-expansion-f35a64a5-21bd-4877-ab32-1db2dd890dba" in namespace "var-expansion-2439" to be "Succeeded or Failed"
    Sep  5 14:35:44.992: INFO: Pod "var-expansion-f35a64a5-21bd-4877-ab32-1db2dd890dba": Phase="Pending", Reason="", readiness=false. Elapsed: 17.341457ms
    Sep  5 14:35:47.001: INFO: Pod "var-expansion-f35a64a5-21bd-4877-ab32-1db2dd890dba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025557474s
    Sep  5 14:35:49.014: INFO: Pod "var-expansion-f35a64a5-21bd-4877-ab32-1db2dd890dba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038603382s
    STEP: Saw pod success
    Sep  5 14:35:49.014: INFO: Pod "var-expansion-f35a64a5-21bd-4877-ab32-1db2dd890dba" satisfied condition "Succeeded or Failed"
    Sep  5 14:35:49.026: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-md-0-62h4s-76fdb55b5f-tzhp2 pod var-expansion-f35a64a5-21bd-4877-ab32-1db2dd890dba container dapi-container: <nil>
    STEP: delete the pod
    Sep  5 14:35:49.082: INFO: Waiting for pod var-expansion-f35a64a5-21bd-4877-ab32-1db2dd890dba to disappear
    Sep  5 14:35:49.096: INFO: Pod var-expansion-f35a64a5-21bd-4877-ab32-1db2dd890dba no longer exists
    [AfterEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:35:49.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-2439" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":414,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:36:12.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-7156" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":-1,"completed":27,"skipped":647,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 37 lines ...
    Sep  5 14:36:28.165: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-74 explain e2e-test-crd-publish-openapi-8361-crds.spec'
    Sep  5 14:36:28.639: INFO: stderr: ""
    Sep  5 14:36:28.639: INFO: stdout: "KIND:     e2e-test-crd-publish-openapi-8361-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
    Sep  5 14:36:28.639: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-74 explain e2e-test-crd-publish-openapi-8361-crds.spec.bars'
    Sep  5 14:36:29.082: INFO: stderr: ""
    Sep  5 14:36:29.082: INFO: stdout: "KIND:     e2e-test-crd-publish-openapi-8361-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
    STEP: kubectl explain works to return error when explain is called on property that doesn't exist
    Sep  5 14:36:29.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-74 explain e2e-test-crd-publish-openapi-8361-crds.spec.bars2'
    Sep  5 14:36:29.546: INFO: rc: 1
    [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:36:32.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-74" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":-1,"completed":28,"skipped":669,"failed":0}
    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] CronJob
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:37:01.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "cronjob-5178" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","total":-1,"completed":14,"skipped":234,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]}
    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 6 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:37:02.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-5944" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":15,"skipped":242,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] EndpointSlice
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 8 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:37:02.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "endpointslice-4792" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":-1,"completed":16,"skipped":277,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]}
    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
    STEP: Deploying the webhook service
    STEP: Verifying the service has paired with the endpoint
    Sep  5 14:37:08.845: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
    [It] listing mutating webhooks should work [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Listing all of the created validation webhooks
    Sep  5 14:37:42.993: FAIL: waiting for webhook configuration to be ready
    Unexpected error:
        <*errors.StatusError | 0xc000894be0>: {
            ErrStatus: {
                TypeMeta: {Kind: "", APIVersion: ""},
                ListMeta: {
                    SelfLink: "",
                    ResourceVersion: "",
... skipping 34 lines ...
    [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
      listing mutating webhooks should work [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep  5 14:37:42.994: waiting for webhook configuration to be ready
      Unexpected error:
          <*errors.StatusError | 0xc000894be0>: {
              ErrStatus: {
                  TypeMeta: {Kind: "", APIVersion: ""},
                  ListMeta: {
                      SelfLink: "",
                      ResourceVersion: "",
... skipping 9 lines ...
          }
          Timeout: request did not complete within requested timeout - context deadline exceeded
      occurred
    
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:680
    ------------------------------
    {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":-1,"completed":16,"skipped":284,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]"]}

    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:37:43.171: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename webhook
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 20 lines ...
    STEP: Destroying namespace "webhook-6562-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":-1,"completed":17,"skipped":284,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]"]}

    [BeforeEach] [sig-node] Lease
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:37:48.312: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename lease-test
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 3 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:37:48.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "lease-test-12" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":-1,"completed":18,"skipped":284,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:37:48.881: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename security-context-test
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
    [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep  5 14:37:48.956: INFO: Waiting up to 5m0s for pod "busybox-user-65534-4ef64dd6-4153-42ce-a8f0-0cb62e54f9bc" in namespace "security-context-test-6112" to be "Succeeded or Failed"
    Sep  5 14:37:48.962: INFO: Pod "busybox-user-65534-4ef64dd6-4153-42ce-a8f0-0cb62e54f9bc": Phase="Pending", Reason="", readiness=false. Elapsed: 5.366656ms
    Sep  5 14:37:50.970: INFO: Pod "busybox-user-65534-4ef64dd6-4153-42ce-a8f0-0cb62e54f9bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014013006s
    Sep  5 14:37:52.980: INFO: Pod "busybox-user-65534-4ef64dd6-4153-42ce-a8f0-0cb62e54f9bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023587978s
    Sep  5 14:37:52.980: INFO: Pod "busybox-user-65534-4ef64dd6-4153-42ce-a8f0-0cb62e54f9bc" satisfied condition "Succeeded or Failed"
    [AfterEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:37:52.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "security-context-test-6112" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":328,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]"]}
    
    SSSS
    ------------------------------
    {"msg":"PASSED [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":-1,"completed":14,"skipped":272,"failed":0}

    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:33:49.404: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename container-probe
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 15 lines ...
    • [SLOW TEST:243.769 seconds]
    [sig-node] Probing container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
      should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":272,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-instrumentation] Events API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:37:53.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "events-1424" for this suite.
    
    •S
    ------------------------------
    {"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":-1,"completed":20,"skipped":332,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 20 lines ...
    STEP: Destroying namespace "services-3168" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":-1,"completed":21,"skipped":349,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
    STEP: Destroying namespace "services-5813" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":-1,"completed":22,"skipped":407,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]"]}
    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:37:53.413: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable via environment variable [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap configmap-828/configmap-test-10121904-075a-4a38-b6b8-a5150b6e4ddc
    STEP: Creating a pod to test consume configMaps
    Sep  5 14:37:53.532: INFO: Waiting up to 5m0s for pod "pod-configmaps-3f4d91c6-86ed-4726-b169-4662c6672fce" in namespace "configmap-828" to be "Succeeded or Failed"
    Sep  5 14:37:53.550: INFO: Pod "pod-configmaps-3f4d91c6-86ed-4726-b169-4662c6672fce": Phase="Pending", Reason="", readiness=false. Elapsed: 17.548166ms
    Sep  5 14:37:55.558: INFO: Pod "pod-configmaps-3f4d91c6-86ed-4726-b169-4662c6672fce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025909353s
    Sep  5 14:37:57.568: INFO: Pod "pod-configmaps-3f4d91c6-86ed-4726-b169-4662c6672fce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035455541s
    STEP: Saw pod success
    Sep  5 14:37:57.568: INFO: Pod "pod-configmaps-3f4d91c6-86ed-4726-b169-4662c6672fce" satisfied condition "Succeeded or Failed"
    Sep  5 14:37:57.573: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-worker-fcjlzd pod pod-configmaps-3f4d91c6-86ed-4726-b169-4662c6672fce container env-test: <nil>
    STEP: delete the pod
    Sep  5 14:37:57.624: INFO: Waiting for pod pod-configmaps-3f4d91c6-86ed-4726-b169-4662c6672fce to disappear
    Sep  5 14:37:57.631: INFO: Pod pod-configmaps-3f4d91c6-86ed-4726-b169-4662c6672fce no longer exists
    [AfterEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:37:57.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-828" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":330,"failed":0}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-apps] DisruptionController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:37:58.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "disruption-7302" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":-1,"completed":23,"skipped":415,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 37 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:37:59.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-3022" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":-1,"completed":24,"skipped":447,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:37:57.674: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable via the environment [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: creating secret secrets-5850/secret-test-543d7f5a-2b3b-4e94-84f6-cf9d5e63cc18
    STEP: Creating a pod to test consume secrets
    Sep  5 14:37:57.775: INFO: Waiting up to 5m0s for pod "pod-configmaps-616098eb-c6ce-49c8-bb10-aa5fe29f6564" in namespace "secrets-5850" to be "Succeeded or Failed"
    Sep  5 14:37:57.791: INFO: Pod "pod-configmaps-616098eb-c6ce-49c8-bb10-aa5fe29f6564": Phase="Pending", Reason="", readiness=false. Elapsed: 16.019292ms
    Sep  5 14:37:59.799: INFO: Pod "pod-configmaps-616098eb-c6ce-49c8-bb10-aa5fe29f6564": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023993763s
    Sep  5 14:38:01.808: INFO: Pod "pod-configmaps-616098eb-c6ce-49c8-bb10-aa5fe29f6564": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032880063s
    STEP: Saw pod success
    Sep  5 14:38:01.808: INFO: Pod "pod-configmaps-616098eb-c6ce-49c8-bb10-aa5fe29f6564" satisfied condition "Succeeded or Failed"
    Sep  5 14:38:01.814: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-md-0-62h4s-76fdb55b5f-tzhp2 pod pod-configmaps-616098eb-c6ce-49c8-bb10-aa5fe29f6564 container env-test: <nil>
    STEP: delete the pod
    Sep  5 14:38:01.864: INFO: Waiting for pod pod-configmaps-616098eb-c6ce-49c8-bb10-aa5fe29f6564 to disappear
    Sep  5 14:38:01.871: INFO: Pod pod-configmaps-616098eb-c6ce-49c8-bb10-aa5fe29f6564 no longer exists
    [AfterEach] [sig-node] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:38:01.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-5850" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":334,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:38:04.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-432" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":506,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]"]}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Servers with support for Table transformation
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 8 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:38:04.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "tables-3637" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":26,"skipped":511,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:38:04.748: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should allow substituting values in a volume subpath [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test substitution in volume subpath
    Sep  5 14:38:04.825: INFO: Waiting up to 5m0s for pod "var-expansion-fb9e5d6e-bf40-4d33-b7ba-313cfae6c6fc" in namespace "var-expansion-1259" to be "Succeeded or Failed"
    Sep  5 14:38:04.833: INFO: Pod "var-expansion-fb9e5d6e-bf40-4d33-b7ba-313cfae6c6fc": Phase="Pending", Reason="", readiness=false. Elapsed: 7.704017ms
    Sep  5 14:38:06.839: INFO: Pod "var-expansion-fb9e5d6e-bf40-4d33-b7ba-313cfae6c6fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014357883s
    Sep  5 14:38:08.848: INFO: Pod "var-expansion-fb9e5d6e-bf40-4d33-b7ba-313cfae6c6fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022711228s
    STEP: Saw pod success
    Sep  5 14:38:08.848: INFO: Pod "var-expansion-fb9e5d6e-bf40-4d33-b7ba-313cfae6c6fc" satisfied condition "Succeeded or Failed"
    Sep  5 14:38:08.853: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-worker-fcjlzd pod var-expansion-fb9e5d6e-bf40-4d33-b7ba-313cfae6c6fc container dapi-container: <nil>
    STEP: delete the pod
    Sep  5 14:38:08.884: INFO: Waiting for pod var-expansion-fb9e5d6e-bf40-4d33-b7ba-313cfae6c6fc to disappear
    Sep  5 14:38:08.890: INFO: Pod var-expansion-fb9e5d6e-bf40-4d33-b7ba-313cfae6c6fc no longer exists
    [AfterEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:38:08.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-1259" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","total":-1,"completed":27,"skipped":582,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]"]}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:38:08.938: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep  5 14:38:11.034: INFO: Deleting pod "var-expansion-b6b5c97d-43b0-4b53-9548-54a8cc5f25b8" in namespace "var-expansion-3641"
    Sep  5 14:38:11.048: INFO: Wait up to 5m0s for pod "var-expansion-b6b5c97d-43b0-4b53-9548-54a8cc5f25b8" to be fully deleted
    [AfterEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:38:13.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-3641" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","total":-1,"completed":28,"skipped":587,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]"]}

    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 35 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:38:14.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-3567" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":-1,"completed":29,"skipped":598,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should provide container's cpu limit [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  5 14:38:14.723: INFO: Waiting up to 5m0s for pod "downwardapi-volume-35aaa2cc-d875-4922-b6da-1941e20f1c31" in namespace "projected-2452" to be "Succeeded or Failed"
    Sep  5 14:38:14.729: INFO: Pod "downwardapi-volume-35aaa2cc-d875-4922-b6da-1941e20f1c31": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0885ms
    Sep  5 14:38:16.735: INFO: Pod "downwardapi-volume-35aaa2cc-d875-4922-b6da-1941e20f1c31": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012364694s
    Sep  5 14:38:18.744: INFO: Pod "downwardapi-volume-35aaa2cc-d875-4922-b6da-1941e20f1c31": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021327058s
    STEP: Saw pod success
    Sep  5 14:38:18.744: INFO: Pod "downwardapi-volume-35aaa2cc-d875-4922-b6da-1941e20f1c31" satisfied condition "Succeeded or Failed"
    Sep  5 14:38:18.750: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-worker-9oo03u pod downwardapi-volume-35aaa2cc-d875-4922-b6da-1941e20f1c31 container client-container: <nil>
    STEP: delete the pod
    Sep  5 14:38:18.802: INFO: Waiting for pod downwardapi-volume-35aaa2cc-d875-4922-b6da-1941e20f1c31 to disappear
    Sep  5 14:38:18.813: INFO: Pod downwardapi-volume-35aaa2cc-d875-4922-b6da-1941e20f1c31 no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:38:18.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-2452" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":601,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]"]}

    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:38:23.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-3687" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":18,"skipped":361,"failed":0}

    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:38:23.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svcaccounts-3644" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":-1,"completed":19,"skipped":373,"failed":0}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:38:18.877: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name projected-configmap-test-volume-1450311a-e953-4c1c-b279-d3c6434dea7d
    STEP: Creating a pod to test consume configMaps
    Sep  5 14:38:18.969: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e58c9f8a-3113-46ef-88f0-25d85a2295d0" in namespace "projected-7338" to be "Succeeded or Failed"
    Sep  5 14:38:18.975: INFO: Pod "pod-projected-configmaps-e58c9f8a-3113-46ef-88f0-25d85a2295d0": Phase="Pending", Reason="", readiness=false. Elapsed: 5.843074ms
    Sep  5 14:38:20.984: INFO: Pod "pod-projected-configmaps-e58c9f8a-3113-46ef-88f0-25d85a2295d0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014972542s
    Sep  5 14:38:22.992: INFO: Pod "pod-projected-configmaps-e58c9f8a-3113-46ef-88f0-25d85a2295d0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022267743s
    Sep  5 14:38:25.001: INFO: Pod "pod-projected-configmaps-e58c9f8a-3113-46ef-88f0-25d85a2295d0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.031219775s
    STEP: Saw pod success
    Sep  5 14:38:25.001: INFO: Pod "pod-projected-configmaps-e58c9f8a-3113-46ef-88f0-25d85a2295d0" satisfied condition "Succeeded or Failed"
    Sep  5 14:38:25.008: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-md-0-62h4s-76fdb55b5f-9jgcx pod pod-projected-configmaps-e58c9f8a-3113-46ef-88f0-25d85a2295d0 container agnhost-container: <nil>
    STEP: delete the pod
    Sep  5 14:38:25.061: INFO: Waiting for pod pod-projected-configmaps-e58c9f8a-3113-46ef-88f0-25d85a2295d0 to disappear
    Sep  5 14:38:25.068: INFO: Pod pod-projected-configmaps-e58c9f8a-3113-46ef-88f0-25d85a2295d0 no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:38:25.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-7338" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":613,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]"]}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-node] Kubelet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:38:27.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubelet-test-8947" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":617,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]"]}

    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  5 14:38:27.363: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e69c44ef-e1bb-4546-a9e4-2c8a256fa034" in namespace "projected-2548" to be "Succeeded or Failed"
    Sep  5 14:38:27.370: INFO: Pod "downwardapi-volume-e69c44ef-e1bb-4546-a9e4-2c8a256fa034": Phase="Pending", Reason="", readiness=false. Elapsed: 6.682948ms
    Sep  5 14:38:29.378: INFO: Pod "downwardapi-volume-e69c44ef-e1bb-4546-a9e4-2c8a256fa034": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014931019s
    Sep  5 14:38:31.388: INFO: Pod "downwardapi-volume-e69c44ef-e1bb-4546-a9e4-2c8a256fa034": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024828815s
    STEP: Saw pod success
    Sep  5 14:38:31.388: INFO: Pod "downwardapi-volume-e69c44ef-e1bb-4546-a9e4-2c8a256fa034" satisfied condition "Succeeded or Failed"
    Sep  5 14:38:31.396: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-md-0-62h4s-76fdb55b5f-9jgcx pod downwardapi-volume-e69c44ef-e1bb-4546-a9e4-2c8a256fa034 container client-container: <nil>
    STEP: delete the pod
    Sep  5 14:38:31.429: INFO: Waiting for pod downwardapi-volume-e69c44ef-e1bb-4546-a9e4-2c8a256fa034 to disappear
    Sep  5 14:38:31.436: INFO: Pod downwardapi-volume-e69c44ef-e1bb-4546-a9e4-2c8a256fa034 no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:38:31.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-2548" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":33,"skipped":626,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:36
    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:38:31.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "sysctl-1797" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":34,"skipped":656,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]"]}

    
    SSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] DisruptionController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 28 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:38:42.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "disruption-3329" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it [Conformance]","total":-1,"completed":35,"skipped":671,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:38:46.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-probe-665" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":379,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicaSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 18 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:38:53.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replicaset-5343" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicaSet Replace and Patch tests [Conformance]","total":-1,"completed":21,"skipped":411,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected combined
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-projected-all-test-volume-9bc41870-8fd6-40f6-96dd-288187cab300
    STEP: Creating secret with name secret-projected-all-test-volume-39384dee-526e-4f05-82ca-54daeef81d29
    STEP: Creating a pod to test Check all projections for projected volume plugin
    Sep  5 14:38:53.729: INFO: Waiting up to 5m0s for pod "projected-volume-5dcd76fa-b84b-47cb-8528-7d2fe3b71ba2" in namespace "projected-6009" to be "Succeeded or Failed"
    Sep  5 14:38:53.734: INFO: Pod "projected-volume-5dcd76fa-b84b-47cb-8528-7d2fe3b71ba2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.959935ms
    Sep  5 14:38:55.741: INFO: Pod "projected-volume-5dcd76fa-b84b-47cb-8528-7d2fe3b71ba2": Phase="Running", Reason="", readiness=true. Elapsed: 2.012479214s
    Sep  5 14:38:57.750: INFO: Pod "projected-volume-5dcd76fa-b84b-47cb-8528-7d2fe3b71ba2": Phase="Running", Reason="", readiness=false. Elapsed: 4.021478919s
    Sep  5 14:38:59.755: INFO: Pod "projected-volume-5dcd76fa-b84b-47cb-8528-7d2fe3b71ba2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.026531243s
    STEP: Saw pod success
    Sep  5 14:38:59.755: INFO: Pod "projected-volume-5dcd76fa-b84b-47cb-8528-7d2fe3b71ba2" satisfied condition "Succeeded or Failed"
    Sep  5 14:38:59.760: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-worker-9oo03u pod projected-volume-5dcd76fa-b84b-47cb-8528-7d2fe3b71ba2 container projected-all-volume-test: <nil>
    STEP: delete the pod
    Sep  5 14:38:59.786: INFO: Waiting for pod projected-volume-5dcd76fa-b84b-47cb-8528-7d2fe3b71ba2 to disappear
    Sep  5 14:38:59.791: INFO: Pod projected-volume-5dcd76fa-b84b-47cb-8528-7d2fe3b71ba2 no longer exists
    [AfterEach] [sig-storage] Projected combined
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:38:59.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-6009" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":512,"failed":0}

    [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:38:59.809: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename custom-resource-definition
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:39:06.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "custom-resource-definition-1736" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":-1,"completed":23,"skipped":512,"failed":0}
    
    SSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Aggregator
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
    Sep  5 14:38:49.741: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63797985523, loc:(*time.Location)(0xa04a040)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63797985523, loc:(*time.Location)(0xa04a040)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63797985523, loc:(*time.Location)(0xa04a040)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63797985523, loc:(*time.Location)(0xa04a040)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)}
    Sep  5 14:38:51.738: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63797985523, loc:(*time.Location)(0xa04a040)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63797985523, loc:(*time.Location)(0xa04a040)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63797985523, loc:(*time.Location)(0xa04a040)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63797985523, loc:(*time.Location)(0xa04a040)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)}
    Sep  5 14:38:53.742: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63797985523, loc:(*time.Location)(0xa04a040)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63797985523, loc:(*time.Location)(0xa04a040)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63797985523, loc:(*time.Location)(0xa04a040)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63797985523, loc:(*time.Location)(0xa04a040)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)}
    Sep  5 14:39:55.954: INFO: Waited 1m0.203602682s for the sample-apiserver to be ready to handle requests.
    Sep  5 14:39:55.954: INFO: current APIService: {"metadata":{"name":"v1alpha1.wardle.example.com","uid":"9e9ff100-eb32-4b0c-a3b4-eb528e48be78","resourceVersion":"9943","creationTimestamp":"2022-09-05T14:38:55Z","managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"apiregistration.k8s.io/v1","time":"2022-09-05T14:38:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:spec":{"f:caBundle":{},"f:group":{},"f:groupPriorityMinimum":{},"f:service":{".":{},"f:name":{},"f:namespace":{},"f:port":{}},"f:version":{},"f:versionPriority":{}}}},{"manager":"kube-apiserver","operation":"Update","apiVersion":"apiregistration.k8s.io/v1","time":"2022-09-05T14:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}},"subresource":"status"}]},"spec":{"service":{"namespace":"aggregator-224","name":"sample-api","port":7443},"group":"wardle.example.com","version":"v1alpha1","caBundle":"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURGakNDQWY2Z0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFkTVJzd0dRWURWUVFERXhKbE1tVXQKYzJWeWRtVnlMV05sY25RdFkyRXdIaGNOTWpJd09UQTFNVFF6T0RReldoY05Nekl3T1RBeU1UUXpPRFF6V2pBZApNUnN3R1FZRFZRUURFeEpsTW1VdGMyVnlkbVZ5TFdObGNuUXRZMkV3Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBCkE0SUJEd0F3Z2dFS0FvSUJBUURWMHVkQ1JUVCt2ODhLenRMempmaXJtWjNOc0dZQ1RVektXajAvUUtJNVVnN2EKZTIzQU16SHpyTERtYmVRa2d4LzZqbGxleDdnZGZmc01vaU11K0xFRWtWd1JpU3hycTNiWDBmRWsrazZ2aXlHagpTcE52YUMyR09iT1NjSFQzcEtGRHpQOFlWblpjbXJadEF1dWl3R0Y4MWU0clRmZW85Rzl6dlh2Wi9EWnFiOUV0CndWUWVyZXFVQThJNjBwTE5jd2l3b2s5S0FIenlRaWIyZUJFWnhsWVFZNWZYbkxkTkhRdnBCQmJRVjlEVHNZS2UKMU1KcDRBbTB6aDAvQ2laN3A4L3JZZjBBZE8zczk0aHJSNkxRTUFsZTRwcWN2dW1hYXdJbXpvWjIrTHRrWDJwbgpManhkUlpaTXhIQ09TWitoQlZBYU04N1g0dmhmODk4MGJnUWtpTGRMQWdNQkFBR2pZVEJmTUE0R0ExVWREd0VCCi93UUVBd0lDcERBUEJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJUbEpFYy9YYVR6ajBLNGY5NmYKYXBTa0JTdDBoekFkQmdOVkhSRUVGakFVZ2hKbE1tVXRjMlZ5ZG1WeUxXTmxjblF0WTJFd0RRWUpLb1pJaHZjTgpBUUVMQlFBRGdnRUJBSEFjNktIWmozbkZmOTdvZ2wrcVp4L3FiRWhzT0UwNkVqd242UlpjTVFmeCs2MGtTOW1aCkk3NUlzZkRnN3pKWVMwNE5EZldkRUg1cXFpOTVrU1JVRk50R2ZYU2orSWhaS29uQVUzU2tzdFQxRWl6Uy9QNDYKTndwZVd6TTF4N2pCNzFyRndBdWhubWdNdnVVV1k5Sno2aUxjajhjdFFlWFF0SDVNbUNESzc1Uk85bEJqZVRuSApOdm9mS2RpQkFNcmJUWnNTZFJMYlJRVXFadS9hVzR2QTFUWXRYUFF2eFJXVk5nU2dmSWRQOENVYUttcWN2dTlUCi9YV1ZIVUFyTXZldzg3Nm5JWEFBR292Zks1Qm1JdExvRFVrVSt5UlJiQXBMQTAzZm1EWWVKOGtPRk5CaDlXWVkKNU1xUDFQWWNLdVZTRkg5d3JMWDVSTnFvSDM3czYvcjk3NTg9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K","groupPriorityMinimum":2000,"versionPriority":200},"status":{"conditions":[{"type":"Available","status":"False","lastTransitionTime":"2022-09-05T14:38:55Z","reason":"FailedDiscoveryCheck","message":"failing or missing response from https://10.132.68.32:7443/apis/wardle.example.com/v1alpha1: Get \"https://10.132.68.32:7443/apis/wardle.example.com/v1alpha1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"}]}}
    Sep  5 14:39:55.956: INFO: current pods: {"metadata":{"resourceVersion":"9943"},"items":[{"metadata":{"name":"sample-apiserver-deployment-64f6b9dc99-ldzmd","generateName":"sample-apiserver-deployment-64f6b9dc99-","namespace":"aggregator-224","uid":"43275a6d-25f0-4fc2-8736-0521c7802b1e","resourceVersion":"9700","creationTimestamp":"2022-09-05T14:38:43Z","labels":{"apiserver":"true","app":"sample-apiserver","pod-template-hash":"64f6b9dc99"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"sample-apiserver-deployment-64f6b9dc99","uid":"bcba298b-8762-44a7-9c31-d0419a6ee97e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-05T14:38:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:apiserver":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcba298b-8762-44a7-9c31-d0419a6ee97e\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"etcd\"}":{".":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}},"k:{\"name\":\"sample-apiserver\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/apiserver.local.config/certificates\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"apiserver-certs\"}":{".":{},"f:name":{},"f:secret":{".":{},"f:defaultMode":{},"f:secretName":{}}}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-05T14:38:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.43\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"apiserver-certs","secret":{"secretName":"sample-apiserver-secret","defaultMode":420}},{"name":"kube-api-access-tc5rp","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"sample-apiserver","image":"k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4","args":["--etcd-servers=http://127.0.0.1:2379","--tls-cert-file=/apiserver.local.config/certificates/tls.crt","--tls-private-key-file=/apiserver.local.config/certificates/tls.key","--audit-log-path=-","--audit-log-maxage=0","--audit-log-maxbackup=0"],"resources":{},"volumeMounts":[{"name":"apiserver-certs","readOnly":true,"mountPath":"/apiserver.local.config/certificates"},{"name":"kube-api-access-tc5rp","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"},{"name":"etcd","image":"k8s.gcr.io/etcd:3.4.13-0","command":["/usr/local/bin/etcd","--listen-client-urls","http://127.0.0.1:2379","--advertise-client-urls","http://127.0.0.1:2379"],"resources":{},"volumeMounts":[{"name":"kube-api-access-tc5rp","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"k8s-upgrade-and-conformance-s5wz98-worker-fcjlzd","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-09-05T14:38:43Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-09-05T14:38:54Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-09-05T14:38:54Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-09-05T14:38:43Z"}],"hostIP":"172.18.0.6","podIP":"192.168.2.43","podIPs":[{"ip":"192.168.2.43"}],"startTime":"2022-09-05T14:38:43Z","containerStatuses":[{"name":"etcd","state":{"running":{"startedAt":"2022-09-05T14:38:54Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/etcd:3.4.13-0","imageID":"k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2","containerID":"containerd://09235259784d4178b405a6e5e39e1caeb517e991bad5040f305e35acdde0b123","started":true},{"name":"sample-apiserver","state":{"running":{"startedAt":"2022-09-05T14:38:47Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4","imageID":"k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276","containerID":"containerd://8d39e1c07d2250df3c7fac1e8858fff016a811a5d20c96b63db22129e2331f7d","started":true}],"qosClass":"BestEffort"}}]}
    Sep  5 14:39:55.972: INFO: logs of sample-apiserver-deployment-64f6b9dc99-ldzmd/sample-apiserver (error: <nil>): W0905 14:38:48.288780       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found
    W0905 14:38:48.288907       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found
    I0905 14:38:48.401447       1 plugins.go:158] Loaded 3 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,MutatingAdmissionWebhook,BanFlunder.
    I0905 14:38:48.401486       1 plugins.go:161] Loaded 1 validating admission controller(s) successfully in the following order: ValidatingAdmissionWebhook.
    I0905 14:38:48.405262       1 client.go:361] parsed scheme: "endpoint"
    I0905 14:38:48.405361       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
    W0905 14:38:48.406461       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
    I0905 14:38:48.473000       1 client.go:361] parsed scheme: "endpoint"
    I0905 14:38:48.473214       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
    W0905 14:38:48.474463       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
    W0905 14:38:49.407309       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
    W0905 14:38:49.475669       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
    W0905 14:38:51.072817       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
    W0905 14:38:51.094989       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
    W0905 14:38:53.262153       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
    W0905 14:38:54.038019       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
    I0905 14:38:58.014577       1 client.go:361] parsed scheme: "endpoint"
    I0905 14:38:58.014647       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
    I0905 14:38:58.016459       1 client.go:361] parsed scheme: "endpoint"
    I0905 14:38:58.016508       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
    I0905 14:38:58.017837       1 client.go:361] parsed scheme: "endpoint"
    I0905 14:38:58.017920       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
... skipping 4 lines ...
    I0905 14:38:58.094829       1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
    I0905 14:38:58.095133       1 secure_serving.go:178] Serving securely on [::]:443
    I0905 14:38:58.095428       1 tlsconfig.go:219] Starting DynamicServingCertificateController
    I0905 14:38:58.194962       1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 
    I0905 14:38:58.195008       1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
    
    Sep  5 14:39:55.981: INFO: logs of sample-apiserver-deployment-64f6b9dc99-ldzmd/etcd (error: <nil>): [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
    2022-09-05 14:38:54.629097 I | etcdmain: etcd Version: 3.4.13
    2022-09-05 14:38:54.629188 I | etcdmain: Git SHA: ae9734ed2
    2022-09-05 14:38:54.629192 I | etcdmain: Go Version: go1.12.17
    2022-09-05 14:38:54.629205 I | etcdmain: Go OS/Arch: linux/amd64
    2022-09-05 14:38:54.629228 I | etcdmain: setting maximum number of CPUs to 8, total number of available CPUs is 8
    2022-09-05 14:38:54.629237 W | etcdmain: no data-dir provided, using default data-dir ./default.etcd
... skipping 26 lines ...
    2022-09-05 14:38:55.084164 N | etcdserver/membership: set the initial cluster version to 3.4
    2022-09-05 14:38:55.084274 I | etcdserver/api: enabled capabilities for version 3.4
    2022-09-05 14:38:55.084302 I | etcdserver: published {Name:default ClientURLs:[http://127.0.0.1:2379]} to cluster cdf818194e3a8c32
    2022-09-05 14:38:55.084344 I | embed: ready to serve client requests
    2022-09-05 14:38:55.085636 N | embed: serving insecure client requests on 127.0.0.1:2379, this is strongly discouraged!
    
    Sep  5 14:39:55.981: FAIL: gave up waiting for apiservice wardle to come up successfully
    Unexpected error:
        <*errors.errorString | 0xc000248290>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 22 lines ...
    [sig-api-machinery] Aggregator
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
      Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep  5 14:39:55.982: gave up waiting for apiservice wardle to come up successfully
      Unexpected error:
          <*errors.errorString | 0xc000248290>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
... skipping 19 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:39:58.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-probe-2708" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":531,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 43 lines ...
    STEP: Destroying namespace "services-2138" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":-1,"completed":25,"skipped":552,"failed":0}
    
    SSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Container Runtime
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:40:03.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-runtime-7736" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":567,"failed":0}
    
    SS
    ------------------------------
    {"msg":"FAILED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":35,"skipped":825,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    [BeforeEach] [sig-api-machinery] Aggregator
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:39:56.308: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename aggregator
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 18 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:40:07.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "aggregator-7257" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":36,"skipped":825,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:40:07.622: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0777 on tmpfs
    Sep  5 14:40:07.977: INFO: Waiting up to 5m0s for pod "pod-dda735fe-1596-4a12-a5c8-3f655f22a838" in namespace "emptydir-9546" to be "Succeeded or Failed"
    Sep  5 14:40:07.994: INFO: Pod "pod-dda735fe-1596-4a12-a5c8-3f655f22a838": Phase="Pending", Reason="", readiness=false. Elapsed: 16.365383ms
    Sep  5 14:40:10.000: INFO: Pod "pod-dda735fe-1596-4a12-a5c8-3f655f22a838": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022469583s
    Sep  5 14:40:12.006: INFO: Pod "pod-dda735fe-1596-4a12-a5c8-3f655f22a838": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028500107s
    STEP: Saw pod success
    Sep  5 14:40:12.006: INFO: Pod "pod-dda735fe-1596-4a12-a5c8-3f655f22a838" satisfied condition "Succeeded or Failed"
    Sep  5 14:40:12.011: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-md-0-62h4s-76fdb55b5f-tzhp2 pod pod-dda735fe-1596-4a12-a5c8-3f655f22a838 container test-container: <nil>
    STEP: delete the pod
    Sep  5 14:40:12.042: INFO: Waiting for pod pod-dda735fe-1596-4a12-a5c8-3f655f22a838 to disappear
    Sep  5 14:40:12.046: INFO: Pod pod-dda735fe-1596-4a12-a5c8-3f655f22a838 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:40:12.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-9546" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":37,"skipped":837,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:40:12.108: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0666 on tmpfs
    Sep  5 14:40:12.160: INFO: Waiting up to 5m0s for pod "pod-3c31d8c3-c92e-439e-ab16-910dd39ef85c" in namespace "emptydir-1213" to be "Succeeded or Failed"
    Sep  5 14:40:12.165: INFO: Pod "pod-3c31d8c3-c92e-439e-ab16-910dd39ef85c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.52853ms
    Sep  5 14:40:14.170: INFO: Pod "pod-3c31d8c3-c92e-439e-ab16-910dd39ef85c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010190838s
    Sep  5 14:40:16.177: INFO: Pod "pod-3c31d8c3-c92e-439e-ab16-910dd39ef85c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016711856s
    STEP: Saw pod success
    Sep  5 14:40:16.177: INFO: Pod "pod-3c31d8c3-c92e-439e-ab16-910dd39ef85c" satisfied condition "Succeeded or Failed"
    Sep  5 14:40:16.182: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-md-0-62h4s-76fdb55b5f-tzhp2 pod pod-3c31d8c3-c92e-439e-ab16-910dd39ef85c container test-container: <nil>
    STEP: delete the pod
    Sep  5 14:40:16.202: INFO: Waiting for pod pod-3c31d8c3-c92e-439e-ab16-910dd39ef85c to disappear
    Sep  5 14:40:16.207: INFO: Pod pod-3c31d8c3-c92e-439e-ab16-910dd39ef85c no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:40:16.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-1213" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":38,"skipped":856,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 20 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:40:19.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-6293" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":-1,"completed":27,"skipped":569,"failed":0}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-scheduling] LimitRange
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 32 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:40:26.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "limitrange-1513" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":-1,"completed":28,"skipped":572,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] CronJob
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 17 lines ...
    • [SLOW TEST:312.168 seconds]
    [sig-apps] CronJob
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
      should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","total":-1,"completed":28,"skipped":462,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicationController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 27 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:41:04.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replication-controller-8288" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":-1,"completed":29,"skipped":482,"failed":0}

    
    SSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should provide container's cpu request [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  5 14:41:04.981: INFO: Waiting up to 5m0s for pod "downwardapi-volume-293394a9-9a68-4cd7-8a92-58ce8d1b6aab" in namespace "downward-api-9642" to be "Succeeded or Failed"
    Sep  5 14:41:04.987: INFO: Pod "downwardapi-volume-293394a9-9a68-4cd7-8a92-58ce8d1b6aab": Phase="Pending", Reason="", readiness=false. Elapsed: 6.095319ms
    Sep  5 14:41:06.997: INFO: Pod "downwardapi-volume-293394a9-9a68-4cd7-8a92-58ce8d1b6aab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015597159s
    Sep  5 14:41:09.005: INFO: Pod "downwardapi-volume-293394a9-9a68-4cd7-8a92-58ce8d1b6aab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023771937s
    STEP: Saw pod success
    Sep  5 14:41:09.005: INFO: Pod "downwardapi-volume-293394a9-9a68-4cd7-8a92-58ce8d1b6aab" satisfied condition "Succeeded or Failed"
    Sep  5 14:41:09.010: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-worker-9oo03u pod downwardapi-volume-293394a9-9a68-4cd7-8a92-58ce8d1b6aab container client-container: <nil>
    STEP: delete the pod
    Sep  5 14:41:09.041: INFO: Waiting for pod downwardapi-volume-293394a9-9a68-4cd7-8a92-58ce8d1b6aab to disappear
    Sep  5 14:41:09.045: INFO: Pod downwardapi-volume-293394a9-9a68-4cd7-8a92-58ce8d1b6aab no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:41:09.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-9642" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":500,"failed":0}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicaSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:41:14.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replicaset-1059" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicaSet should list and delete a collection of ReplicaSets [Conformance]","total":-1,"completed":31,"skipped":506,"failed":0}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:41:14.236: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0777 on node default medium
    Sep  5 14:41:14.328: INFO: Waiting up to 5m0s for pod "pod-bd4ffd6e-1016-444b-9cdf-462b517be370" in namespace "emptydir-2095" to be "Succeeded or Failed"
    Sep  5 14:41:14.337: INFO: Pod "pod-bd4ffd6e-1016-444b-9cdf-462b517be370": Phase="Pending", Reason="", readiness=false. Elapsed: 9.557129ms
    Sep  5 14:41:16.345: INFO: Pod "pod-bd4ffd6e-1016-444b-9cdf-462b517be370": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017133016s
    Sep  5 14:41:18.352: INFO: Pod "pod-bd4ffd6e-1016-444b-9cdf-462b517be370": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024232306s
    STEP: Saw pod success
    Sep  5 14:41:18.352: INFO: Pod "pod-bd4ffd6e-1016-444b-9cdf-462b517be370" satisfied condition "Succeeded or Failed"
    Sep  5 14:41:18.358: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-md-0-62h4s-76fdb55b5f-tzhp2 pod pod-bd4ffd6e-1016-444b-9cdf-462b517be370 container test-container: <nil>
    STEP: delete the pod
    Sep  5 14:41:18.387: INFO: Waiting for pod pod-bd4ffd6e-1016-444b-9cdf-462b517be370 to disappear
    Sep  5 14:41:18.392: INFO: Pod pod-bd4ffd6e-1016-444b-9cdf-462b517be370 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:41:18.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-2095" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":513,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 19 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:41:29.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-watch-7414" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":-1,"completed":29,"skipped":601,"failed":0}

    
    S
    ------------------------------
    [BeforeEach] [sig-network] EndpointSlice
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 8 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:41:31.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "endpointslice-3147" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":-1,"completed":30,"skipped":602,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] HostPort
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 28 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:41:32.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "hostport-2420" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":-1,"completed":33,"skipped":552,"failed":0}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:41:31.884: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-test-volume-ae47f7d8-08e1-49ef-a923-9a9550ac50d9
    STEP: Creating a pod to test consume configMaps
    Sep  5 14:41:31.987: INFO: Waiting up to 5m0s for pod "pod-configmaps-a025fa86-b265-4aa3-95f0-fb799789f549" in namespace "configmap-6912" to be "Succeeded or Failed"
    Sep  5 14:41:32.007: INFO: Pod "pod-configmaps-a025fa86-b265-4aa3-95f0-fb799789f549": Phase="Pending", Reason="", readiness=false. Elapsed: 20.023697ms
    Sep  5 14:41:34.018: INFO: Pod "pod-configmaps-a025fa86-b265-4aa3-95f0-fb799789f549": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030372678s
    Sep  5 14:41:36.023: INFO: Pod "pod-configmaps-a025fa86-b265-4aa3-95f0-fb799789f549": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035525393s
    STEP: Saw pod success
    Sep  5 14:41:36.023: INFO: Pod "pod-configmaps-a025fa86-b265-4aa3-95f0-fb799789f549" satisfied condition "Succeeded or Failed"
    Sep  5 14:41:36.029: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-worker-fcjlzd pod pod-configmaps-a025fa86-b265-4aa3-95f0-fb799789f549 container agnhost-container: <nil>
    STEP: delete the pod
    Sep  5 14:41:36.058: INFO: Waiting for pod pod-configmaps-a025fa86-b265-4aa3-95f0-fb799789f549 to disappear
    Sep  5 14:41:36.062: INFO: Pod pod-configmaps-a025fa86-b265-4aa3-95f0-fb799789f549 no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:41:36.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-6912" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":627,"failed":0}

    
    SSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] KubeletManagedEtcHosts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 48 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:41:43.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "e2e-kubelet-etc-hosts-3739" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":646,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] PodTemplates
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 6 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:41:43.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "podtemplate-5541" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":-1,"completed":33,"skipped":678,"failed":0}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Container Lifecycle Hook
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 20 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:41:49.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-lifecycle-hook-3830" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":684,"failed":0}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir wrapper volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:41:51.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-wrapper-8084" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":35,"skipped":692,"failed":0}

    
    SS
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 20 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:42:08.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-3019" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","total":-1,"completed":34,"skipped":562,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
    STEP: Setting up data
    [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating pod pod-subpath-test-configmap-bcxt
    STEP: Creating a pod to test atomic-volume-subpath
    Sep  5 14:42:09.012: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-bcxt" in namespace "subpath-1111" to be "Succeeded or Failed"
    Sep  5 14:42:09.020: INFO: Pod "pod-subpath-test-configmap-bcxt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.402212ms
    Sep  5 14:42:11.025: INFO: Pod "pod-subpath-test-configmap-bcxt": Phase="Running", Reason="", readiness=true. Elapsed: 2.011278451s
    Sep  5 14:42:13.030: INFO: Pod "pod-subpath-test-configmap-bcxt": Phase="Running", Reason="", readiness=true. Elapsed: 4.016692313s
    Sep  5 14:42:15.036: INFO: Pod "pod-subpath-test-configmap-bcxt": Phase="Running", Reason="", readiness=true. Elapsed: 6.022660356s
    Sep  5 14:42:17.042: INFO: Pod "pod-subpath-test-configmap-bcxt": Phase="Running", Reason="", readiness=true. Elapsed: 8.029030841s
    Sep  5 14:42:19.048: INFO: Pod "pod-subpath-test-configmap-bcxt": Phase="Running", Reason="", readiness=true. Elapsed: 10.034735373s
... skipping 2 lines ...
    Sep  5 14:42:25.067: INFO: Pod "pod-subpath-test-configmap-bcxt": Phase="Running", Reason="", readiness=true. Elapsed: 16.053984217s
    Sep  5 14:42:27.075: INFO: Pod "pod-subpath-test-configmap-bcxt": Phase="Running", Reason="", readiness=true. Elapsed: 18.061346842s
    Sep  5 14:42:29.080: INFO: Pod "pod-subpath-test-configmap-bcxt": Phase="Running", Reason="", readiness=true. Elapsed: 20.066434565s
    Sep  5 14:42:31.086: INFO: Pod "pod-subpath-test-configmap-bcxt": Phase="Running", Reason="", readiness=false. Elapsed: 22.072825715s
    Sep  5 14:42:33.092: INFO: Pod "pod-subpath-test-configmap-bcxt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.078705682s
    STEP: Saw pod success
    Sep  5 14:42:33.092: INFO: Pod "pod-subpath-test-configmap-bcxt" satisfied condition "Succeeded or Failed"
    Sep  5 14:42:33.097: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-md-0-62h4s-76fdb55b5f-tzhp2 pod pod-subpath-test-configmap-bcxt container test-container-subpath-configmap-bcxt: <nil>
    STEP: delete the pod
    Sep  5 14:42:33.116: INFO: Waiting for pod pod-subpath-test-configmap-bcxt to disappear
    Sep  5 14:42:33.120: INFO: Pod pod-subpath-test-configmap-bcxt no longer exists
    STEP: Deleting pod pod-subpath-test-configmap-bcxt
    Sep  5 14:42:33.120: INFO: Deleting pod "pod-subpath-test-configmap-bcxt" in namespace "subpath-1111"
    [AfterEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:42:33.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "subpath-1111" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":-1,"completed":35,"skipped":583,"failed":0}

    
    SSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
    STEP: Deploying the webhook service
    STEP: Verifying the service has paired with the endpoint
    Sep  5 14:41:55.285: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
    [It] should be able to convert from CR v1 to CR v2 [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep  5 14:41:55.291: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 14:42:07.865: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=e2e-test-crd-webhook-649-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-5071.svc:9443/crdconvert?timeout=30s": net/http: TLS handshake timeout
    Sep  5 14:42:17.972: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=e2e-test-crd-webhook-649-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-5071.svc:9443/crdconvert?timeout=30s": net/http: TLS handshake timeout
    Sep  5 14:42:28.073: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=e2e-test-crd-webhook-649-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-5071.svc:9443/crdconvert?timeout=30s": net/http: TLS handshake timeout
    Sep  5 14:42:38.177: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=e2e-test-crd-webhook-649-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-5071.svc:9443/crdconvert?timeout=30s": net/http: TLS handshake timeout
    Sep  5 14:42:48.186: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=e2e-test-crd-webhook-649-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-5071.svc:9443/crdconvert?timeout=30s": net/http: TLS handshake timeout
    Sep  5 14:42:48.187: FAIL: Unexpected error:
        <*errors.errorString | 0xc000244280>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 21 lines ...
    • Failure [57.177 seconds]
    [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
      should be able to convert from CR v1 to CR v2 [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep  5 14:42:48.187: Unexpected error:
          <*errors.errorString | 0xc000244280>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:499
    ------------------------------
    {"msg":"FAILED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":-1,"completed":35,"skipped":694,"failed":1,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

    [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:42:48.787: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename crd-webhook
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 18 lines ...
    STEP: Destroying namespace "crd-webhook-4440" for this suite.
    [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":-1,"completed":36,"skipped":694,"failed":1,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:42:56.061: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename security-context
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
    Sep  5 14:42:56.142: INFO: Waiting up to 5m0s for pod "security-context-e3a9eec0-2103-4dfc-802f-5743be6b417a" in namespace "security-context-6450" to be "Succeeded or Failed"
    Sep  5 14:42:56.147: INFO: Pod "security-context-e3a9eec0-2103-4dfc-802f-5743be6b417a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.515386ms
    Sep  5 14:42:58.153: INFO: Pod "security-context-e3a9eec0-2103-4dfc-802f-5743be6b417a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011069827s
    Sep  5 14:43:00.160: INFO: Pod "security-context-e3a9eec0-2103-4dfc-802f-5743be6b417a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017189444s
    STEP: Saw pod success
    Sep  5 14:43:00.160: INFO: Pod "security-context-e3a9eec0-2103-4dfc-802f-5743be6b417a" satisfied condition "Succeeded or Failed"
    Sep  5 14:43:00.163: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-md-0-62h4s-76fdb55b5f-tzhp2 pod security-context-e3a9eec0-2103-4dfc-802f-5743be6b417a container test-container: <nil>
    STEP: delete the pod
    Sep  5 14:43:00.182: INFO: Waiting for pod security-context-e3a9eec0-2103-4dfc-802f-5743be6b417a to disappear
    Sep  5 14:43:00.186: INFO: Pod security-context-e3a9eec0-2103-4dfc-802f-5743be6b417a no longer exists
    [AfterEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:43:00.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "security-context-6450" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":37,"skipped":717,"failed":1,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:43:25.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-7105" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":-1,"completed":38,"skipped":724,"failed":1,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-apps] ReplicaSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:43:29.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replicaset-1809" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":39,"skipped":725,"failed":1,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:43:33.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-1057" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":40,"skipped":735,"failed":1,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Ingress API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 26 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:43:33.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "ingress-418" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":-1,"completed":41,"skipped":783,"failed":1,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

    
    SSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 7 lines ...
    STEP: Deploying the webhook pod
    STEP: Wait for the deployment to be ready
    Sep  5 14:43:34.467: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
    STEP: Deploying the webhook service
    STEP: Verifying the service has paired with the endpoint
    Sep  5 14:43:37.502: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
    [It] should unconditionally reject operations on fail closed webhook [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
    STEP: create a namespace for the webhook
    STEP: create a configmap should be unconditionally rejected by the webhook
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:43:37.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "webhook-6456" for this suite.
    STEP: Destroying namespace "webhook-6456-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":42,"skipped":800,"failed":1,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 17 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:43:40.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-7552" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":-1,"completed":43,"skipped":811,"failed":1,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 18 lines ...
    • [SLOW TEST:242.827 seconds]
    [sig-node] Probing container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
      should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":39,"skipped":885,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:44:19.150: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename security-context-test
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
    [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep  5 14:44:19.206: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-c71a0034-6e50-41be-8263-3acafe4cc9f7" in namespace "security-context-test-8410" to be "Succeeded or Failed"
    Sep  5 14:44:19.214: INFO: Pod "alpine-nnp-false-c71a0034-6e50-41be-8263-3acafe4cc9f7": Phase="Pending", Reason="", readiness=false. Elapsed: 7.947451ms
    Sep  5 14:44:21.264: INFO: Pod "alpine-nnp-false-c71a0034-6e50-41be-8263-3acafe4cc9f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057935503s
    Sep  5 14:44:23.270: INFO: Pod "alpine-nnp-false-c71a0034-6e50-41be-8263-3acafe4cc9f7": Phase="Running", Reason="", readiness=false. Elapsed: 4.063429952s
    Sep  5 14:44:25.276: INFO: Pod "alpine-nnp-false-c71a0034-6e50-41be-8263-3acafe4cc9f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.069630056s
    Sep  5 14:44:25.276: INFO: Pod "alpine-nnp-false-c71a0034-6e50-41be-8263-3acafe4cc9f7" satisfied condition "Succeeded or Failed"
    [AfterEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:44:25.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "security-context-test-8410" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":40,"skipped":898,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] StatefulSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 49 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:45:11.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "statefulset-1565" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":-1,"completed":44,"skipped":873,"failed":1,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods Extended
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 19 lines ...
    STEP: Creating a kubernetes client
    Sep  5 14:44:25.402: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename init-container
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
    [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: creating the pod
    Sep  5 14:44:25.455: INFO: PodSpec: initContainers in spec.initContainers
    Sep  5 14:45:15.671: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-22dd73f0-cf02-45f9-8dc0-5e5975235f8c", GenerateName:"", Namespace:"init-container-1920", SelfLink:"", UID:"bdc0cbac-4ca2-4791-9fa8-4bad8eee1510", ResourceVersion:"12287", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63797985865, loc:(*time.Location)(0xa04a040)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"455302597"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004180900), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004180918), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004180930), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004180948), Subresource:"status"}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-d5sgj", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), 
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc0044be700), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-d5sgj", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-d5sgj", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", 
MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.5", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-d5sgj", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc004a6e4b8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), 
NodeName:"k8s-upgrade-and-conformance-s5wz98-md-0-62h4s-76fdb55b5f-9jgcx", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc003d36fc0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc004a6e530)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc004a6e550)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc004a6e558), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc004a6e55c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc002f82ce0), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63797985865, loc:(*time.Location)(0xa04a040)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63797985865, loc:(*time.Location)(0xa04a040)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63797985865, loc:(*time.Location)(0xa04a040)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63797985865, loc:(*time.Location)(0xa04a040)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.7", PodIP:"192.168.1.37", PodIPs:[]v1.PodIP{v1.PodIP{IP:"192.168.1.37"}}, StartTime:(*v1.Time)(0xc004180978), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc003d370a0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc003d37110)}, Ready:false, RestartCount:3, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592", ContainerID:"containerd://8c59c396f099fc1af17cf33fddb5b2867693ee0dc857e24874863919e1a6c1b3", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0044be780), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0044be760), Running:(*v1.ContainerStateRunning)(nil), 
Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.5", ImageID:"", ContainerID:"", Started:(*bool)(0xc004a6e5df)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
    [AfterEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:45:15.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "init-container-1920" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":-1,"completed":41,"skipped":932,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSSSSSSSSS
    ------------------------------
    {"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":45,"skipped":893,"failed":1,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:45:11.792: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  5 14:45:11.852: INFO: Waiting up to 5m0s for pod "downwardapi-volume-84a0f3af-bc1f-456e-909f-b98eabbcec67" in namespace "projected-9452" to be "Succeeded or Failed"
    Sep  5 14:45:11.858: INFO: Pod "downwardapi-volume-84a0f3af-bc1f-456e-909f-b98eabbcec67": Phase="Pending", Reason="", readiness=false. Elapsed: 5.056908ms
    Sep  5 14:45:13.864: INFO: Pod "downwardapi-volume-84a0f3af-bc1f-456e-909f-b98eabbcec67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011701058s
    Sep  5 14:45:15.870: INFO: Pod "downwardapi-volume-84a0f3af-bc1f-456e-909f-b98eabbcec67": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017657973s
    STEP: Saw pod success
    Sep  5 14:45:15.870: INFO: Pod "downwardapi-volume-84a0f3af-bc1f-456e-909f-b98eabbcec67" satisfied condition "Succeeded or Failed"
    Sep  5 14:45:15.879: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-md-0-62h4s-76fdb55b5f-tzhp2 pod downwardapi-volume-84a0f3af-bc1f-456e-909f-b98eabbcec67 container client-container: <nil>
    STEP: delete the pod
    Sep  5 14:45:15.899: INFO: Waiting for pod downwardapi-volume-84a0f3af-bc1f-456e-909f-b98eabbcec67 to disappear
    Sep  5 14:45:15.904: INFO: Pod downwardapi-volume-84a0f3af-bc1f-456e-909f-b98eabbcec67 no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:45:15.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-9452" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":46,"skipped":893,"failed":1,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 7 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:45:18.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "custom-resource-definition-752" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":-1,"completed":42,"skipped":943,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  5 14:45:16.064: INFO: Waiting up to 5m0s for pod "downwardapi-volume-25a32d00-2b20-49dc-a4d2-8bddc5401c8d" in namespace "downward-api-713" to be "Succeeded or Failed"
    Sep  5 14:45:16.068: INFO: Pod "downwardapi-volume-25a32d00-2b20-49dc-a4d2-8bddc5401c8d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.65394ms
    Sep  5 14:45:18.075: INFO: Pod "downwardapi-volume-25a32d00-2b20-49dc-a4d2-8bddc5401c8d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011631328s
    Sep  5 14:45:20.082: INFO: Pod "downwardapi-volume-25a32d00-2b20-49dc-a4d2-8bddc5401c8d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01796039s
    STEP: Saw pod success
    Sep  5 14:45:20.082: INFO: Pod "downwardapi-volume-25a32d00-2b20-49dc-a4d2-8bddc5401c8d" satisfied condition "Succeeded or Failed"
    Sep  5 14:45:20.086: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-worker-fcjlzd pod downwardapi-volume-25a32d00-2b20-49dc-a4d2-8bddc5401c8d container client-container: <nil>
    STEP: delete the pod
    Sep  5 14:45:20.123: INFO: Waiting for pod downwardapi-volume-25a32d00-2b20-49dc-a4d2-8bddc5401c8d to disappear
    Sep  5 14:45:20.127: INFO: Pod downwardapi-volume-25a32d00-2b20-49dc-a4d2-8bddc5401c8d no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:45:20.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-713" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":47,"skipped":930,"failed":1,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Container Lifecycle Hook
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 29 lines ...
    Sep  5 14:45:37.079: INFO: Pod pod-with-prestop-exec-hook still exists
    Sep  5 14:45:39.073: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
    Sep  5 14:45:39.080: INFO: Pod pod-with-prestop-exec-hook still exists
    Sep  5 14:45:41.072: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
    Sep  5 14:45:41.077: INFO: Pod pod-with-prestop-exec-hook no longer exists
    STEP: check prestop hook
    Sep  5 14:46:11.079: FAIL: Timed out after 30.001s.
    Expected
        <*errors.errorString | 0xc00304b340>: {
            s: "failed to match regexp \"GET /echo\\\\?msg=prestop\" in output \"I0905 14:45:19.755116       1 log.go:195] Started HTTP server on port 8080\\nI0905 14:45:19.756432       1 log.go:195] Started UDP server on port  8081\\n\"",
        }
    to be nil
    
    Full Stack Trace
    k8s.io/kubernetes/test/e2e/common/node.glob..func11.1.2(0xc001d8d400)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:79 +0x342
... skipping 21 lines ...
        should execute prestop exec hook properly [NodeConformance] [Conformance] [It]
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
        Sep  5 14:46:11.079: Timed out after 30.001s.
        Expected
            <*errors.errorString | 0xc00304b340>: {
                s: "failed to match regexp \"GET /echo\\\\?msg=prestop\" in output \"I0905 14:45:19.755116       1 log.go:195] Started HTTP server on port 8080\\nI0905 14:45:19.756432       1 log.go:195] Started UDP server on port  8081\\n\"",
            }
        to be nil
    
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:79
    ------------------------------
    {"msg":"FAILED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":42,"skipped":949,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]}

    [BeforeEach] [sig-node] Container Lifecycle Hook
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:46:11.097: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename container-lifecycle-hook
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 19 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:46:19.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-lifecycle-hook-6996" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":43,"skipped":949,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] PreStop
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 225 lines ...
    		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
    		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
    		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
    	],
    	"StillContactingPeers": true
    }
    Sep  5 14:46:24.311: FAIL: validating pre-stop.
    Unexpected error:
        <*errors.errorString | 0xc000244280>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 21 lines ...
    [sig-node] PreStop
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
      should call prestop when killing a pod  [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep  5 14:46:24.311: validating pre-stop.
      Unexpected error:
          <*errors.errorString | 0xc000244280>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
... skipping 20 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:46:30.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-2010" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":-1,"completed":44,"skipped":978,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:46:30.877: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0644 on node default medium
    Sep  5 14:46:30.941: INFO: Waiting up to 5m0s for pod "pod-92af809a-a25d-482d-9369-514bab8dbead" in namespace "emptydir-3240" to be "Succeeded or Failed"
    Sep  5 14:46:30.949: INFO: Pod "pod-92af809a-a25d-482d-9369-514bab8dbead": Phase="Pending", Reason="", readiness=false. Elapsed: 7.923435ms
    Sep  5 14:46:32.955: INFO: Pod "pod-92af809a-a25d-482d-9369-514bab8dbead": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014513835s
    Sep  5 14:46:34.961: INFO: Pod "pod-92af809a-a25d-482d-9369-514bab8dbead": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020288958s
    STEP: Saw pod success
    Sep  5 14:46:34.961: INFO: Pod "pod-92af809a-a25d-482d-9369-514bab8dbead" satisfied condition "Succeeded or Failed"
    Sep  5 14:46:34.966: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-md-0-62h4s-76fdb55b5f-tzhp2 pod pod-92af809a-a25d-482d-9369-514bab8dbead container test-container: <nil>
    STEP: delete the pod
    Sep  5 14:46:34.987: INFO: Waiting for pod pod-92af809a-a25d-482d-9369-514bab8dbead to disappear
    Sep  5 14:46:34.994: INFO: Pod pod-92af809a-a25d-482d-9369-514bab8dbead no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:46:34.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-3240" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":45,"skipped":1048,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]}
    
    SSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 51 lines ...
    STEP: Destroying namespace "services-1727" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":46,"skipped":1052,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]}
    
    S
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 54 lines ...
    STEP: Destroying namespace "services-8567" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":-1,"completed":47,"skipped":1053,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]}
    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
    STEP: Destroying namespace "services-3162" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":48,"skipped":1066,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]}
    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:47:21.395: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating projection with secret that has name projected-secret-test-c1a51673-465f-4b23-85ff-bac80ec63adf
    STEP: Creating a pod to test consume secrets
    Sep  5 14:47:21.486: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0eb7567d-eef4-47ea-b5b1-4ffc33e49885" in namespace "projected-7413" to be "Succeeded or Failed"
    Sep  5 14:47:21.493: INFO: Pod "pod-projected-secrets-0eb7567d-eef4-47ea-b5b1-4ffc33e49885": Phase="Pending", Reason="", readiness=false. Elapsed: 6.856814ms
    Sep  5 14:47:23.498: INFO: Pod "pod-projected-secrets-0eb7567d-eef4-47ea-b5b1-4ffc33e49885": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011858373s
    Sep  5 14:47:25.503: INFO: Pod "pod-projected-secrets-0eb7567d-eef4-47ea-b5b1-4ffc33e49885": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016432474s
    STEP: Saw pod success
    Sep  5 14:47:25.503: INFO: Pod "pod-projected-secrets-0eb7567d-eef4-47ea-b5b1-4ffc33e49885" satisfied condition "Succeeded or Failed"
    Sep  5 14:47:25.506: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-md-0-62h4s-76fdb55b5f-tzhp2 pod pod-projected-secrets-0eb7567d-eef4-47ea-b5b1-4ffc33e49885 container projected-secret-volume-test: <nil>
    STEP: delete the pod
    Sep  5 14:47:25.525: INFO: Waiting for pod pod-projected-secrets-0eb7567d-eef4-47ea-b5b1-4ffc33e49885 to disappear
    Sep  5 14:47:25.531: INFO: Pod pod-projected-secrets-0eb7567d-eef4-47ea-b5b1-4ffc33e49885 no longer exists
    [AfterEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:47:25.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-7413" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":49,"skipped":1072,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]}
    
    SSSSSSSS
    ------------------------------
    {"msg":"FAILED [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":-1,"completed":47,"skipped":955,"failed":2,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

    [BeforeEach] [sig-node] PreStop
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:46:24.353: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename prestop
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 222 lines ...
    		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
    		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
    		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
    	],
    	"StillContactingPeers": true
    }
    Sep  5 14:47:28.534: FAIL: validating pre-stop.
    Unexpected error:
        <*errors.errorString | 0xc000244280>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 21 lines ...
    [sig-node] PreStop
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
      should call prestop when killing a pod  [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep  5 14:47:28.534: validating pre-stop.
      Unexpected error:
          <*errors.errorString | 0xc000244280>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
... skipping 6 lines ...
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-eee5f995-a2e7-442f-b83d-d6687456282b
    STEP: Creating a pod to test consume secrets
    Sep  5 14:47:25.616: INFO: Waiting up to 5m0s for pod "pod-secrets-25c3558d-c495-4ba9-b627-8cb8cd775250" in namespace "secrets-4962" to be "Succeeded or Failed"
    Sep  5 14:47:25.621: INFO: Pod "pod-secrets-25c3558d-c495-4ba9-b627-8cb8cd775250": Phase="Pending", Reason="", readiness=false. Elapsed: 4.789142ms
    Sep  5 14:47:27.628: INFO: Pod "pod-secrets-25c3558d-c495-4ba9-b627-8cb8cd775250": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011454535s
    Sep  5 14:47:29.635: INFO: Pod "pod-secrets-25c3558d-c495-4ba9-b627-8cb8cd775250": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018727023s
    STEP: Saw pod success
    Sep  5 14:47:29.635: INFO: Pod "pod-secrets-25c3558d-c495-4ba9-b627-8cb8cd775250" satisfied condition "Succeeded or Failed"
    Sep  5 14:47:29.640: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-md-0-62h4s-76fdb55b5f-tzhp2 pod pod-secrets-25c3558d-c495-4ba9-b627-8cb8cd775250 container secret-volume-test: <nil>
    STEP: delete the pod
    Sep  5 14:47:29.665: INFO: Waiting for pod pod-secrets-25c3558d-c495-4ba9-b627-8cb8cd775250 to disappear
    Sep  5 14:47:29.668: INFO: Pod pod-secrets-25c3558d-c495-4ba9-b627-8cb8cd775250 no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:47:29.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-4962" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":50,"skipped":1080,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]}
    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 16 lines ...
    Sep  5 14:47:39.811: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8277.svc.cluster.local from pod dns-8277/dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248: the server could not find the requested resource (get pods dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248)
    Sep  5 14:47:39.817: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8277.svc.cluster.local from pod dns-8277/dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248: the server could not find the requested resource (get pods dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248)
    Sep  5 14:47:39.834: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8277.svc.cluster.local from pod dns-8277/dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248: the server could not find the requested resource (get pods dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248)
    Sep  5 14:47:39.840: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8277.svc.cluster.local from pod dns-8277/dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248: the server could not find the requested resource (get pods dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248)
    Sep  5 14:47:39.846: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8277.svc.cluster.local from pod dns-8277/dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248: the server could not find the requested resource (get pods dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248)
    Sep  5 14:47:39.852: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8277.svc.cluster.local from pod dns-8277/dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248: the server could not find the requested resource (get pods dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248)
    Sep  5 14:47:39.865: INFO: Lookups using dns-8277/dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8277.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8277.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8277.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8277.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8277.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8277.svc.cluster.local jessie_udp@dns-test-service-2.dns-8277.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8277.svc.cluster.local]
    
    Sep  5 14:47:44.874: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8277.svc.cluster.local from pod dns-8277/dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248: the server could not find the requested resource (get pods dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248)
    Sep  5 14:47:44.878: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8277.svc.cluster.local from pod dns-8277/dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248: the server could not find the requested resource (get pods dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248)
    Sep  5 14:47:44.884: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8277.svc.cluster.local from pod dns-8277/dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248: the server could not find the requested resource (get pods dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248)
    Sep  5 14:47:44.890: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8277.svc.cluster.local from pod dns-8277/dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248: the server could not find the requested resource (get pods dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248)
    Sep  5 14:47:44.907: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8277.svc.cluster.local from pod dns-8277/dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248: the server could not find the requested resource (get pods dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248)
    Sep  5 14:47:44.912: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8277.svc.cluster.local from pod dns-8277/dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248: the server could not find the requested resource (get pods dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248)
    Sep  5 14:47:44.918: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8277.svc.cluster.local from pod dns-8277/dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248: the server could not find the requested resource (get pods dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248)
    Sep  5 14:47:44.924: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8277.svc.cluster.local from pod dns-8277/dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248: the server could not find the requested resource (get pods dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248)
    Sep  5 14:47:44.935: INFO: Lookups using dns-8277/dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8277.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8277.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8277.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8277.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8277.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8277.svc.cluster.local jessie_udp@dns-test-service-2.dns-8277.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8277.svc.cluster.local]
    
    Sep  5 14:47:49.872: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8277.svc.cluster.local from pod dns-8277/dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248: the server could not find the requested resource (get pods dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248)
    Sep  5 14:47:49.877: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8277.svc.cluster.local from pod dns-8277/dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248: the server could not find the requested resource (get pods dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248)
    Sep  5 14:47:49.882: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8277.svc.cluster.local from pod dns-8277/dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248: the server could not find the requested resource (get pods dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248)
    Sep  5 14:47:49.887: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8277.svc.cluster.local from pod dns-8277/dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248: the server could not find the requested resource (get pods dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248)
    Sep  5 14:47:49.903: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8277.svc.cluster.local from pod dns-8277/dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248: the server could not find the requested resource (get pods dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248)
    Sep  5 14:47:49.907: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8277.svc.cluster.local from pod dns-8277/dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248: the server could not find the requested resource (get pods dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248)
    Sep  5 14:47:49.913: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8277.svc.cluster.local from pod dns-8277/dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248: the server could not find the requested resource (get pods dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248)
    Sep  5 14:47:49.918: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8277.svc.cluster.local from pod dns-8277/dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248: the server could not find the requested resource (get pods dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248)
    Sep  5 14:47:49.930: INFO: Lookups using dns-8277/dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8277.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8277.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8277.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8277.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8277.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8277.svc.cluster.local jessie_udp@dns-test-service-2.dns-8277.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8277.svc.cluster.local]

    
    Sep  5 14:47:54.872: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8277.svc.cluster.local from pod dns-8277/dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248: the server could not find the requested resource (get pods dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248)
    Sep  5 14:47:54.877: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8277.svc.cluster.local from pod dns-8277/dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248: the server could not find the requested resource (get pods dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248)
    Sep  5 14:47:54.882: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8277.svc.cluster.local from pod dns-8277/dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248: the server could not find the requested resource (get pods dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248)
    Sep  5 14:47:54.887: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8277.svc.cluster.local from pod dns-8277/dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248: the server could not find the requested resource (get pods dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248)
    Sep  5 14:47:54.905: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8277.svc.cluster.local from pod dns-8277/dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248: the server could not find the requested resource (get pods dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248)
    Sep  5 14:47:54.911: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8277.svc.cluster.local from pod dns-8277/dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248: the server could not find the requested resource (get pods dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248)
    Sep  5 14:47:54.916: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8277.svc.cluster.local from pod dns-8277/dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248: the server could not find the requested resource (get pods dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248)
    Sep  5 14:47:54.921: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8277.svc.cluster.local from pod dns-8277/dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248: the server could not find the requested resource (get pods dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248)
    Sep  5 14:47:54.933: INFO: Lookups using dns-8277/dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8277.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8277.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8277.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8277.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8277.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8277.svc.cluster.local jessie_udp@dns-test-service-2.dns-8277.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8277.svc.cluster.local]

    
    Sep  5 14:47:59.870: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8277.svc.cluster.local from pod dns-8277/dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248: the server could not find the requested resource (get pods dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248)
    Sep  5 14:47:59.875: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8277.svc.cluster.local from pod dns-8277/dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248: the server could not find the requested resource (get pods dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248)
    Sep  5 14:47:59.879: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8277.svc.cluster.local from pod dns-8277/dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248: the server could not find the requested resource (get pods dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248)
    Sep  5 14:47:59.883: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8277.svc.cluster.local from pod dns-8277/dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248: the server could not find the requested resource (get pods dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248)
    Sep  5 14:47:59.896: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8277.svc.cluster.local from pod dns-8277/dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248: the server could not find the requested resource (get pods dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248)
    Sep  5 14:47:59.900: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8277.svc.cluster.local from pod dns-8277/dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248: the server could not find the requested resource (get pods dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248)
    Sep  5 14:47:59.905: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8277.svc.cluster.local from pod dns-8277/dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248: the server could not find the requested resource (get pods dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248)
    Sep  5 14:47:59.910: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8277.svc.cluster.local from pod dns-8277/dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248: the server could not find the requested resource (get pods dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248)
    Sep  5 14:47:59.919: INFO: Lookups using dns-8277/dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8277.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8277.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8277.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8277.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8277.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8277.svc.cluster.local jessie_udp@dns-test-service-2.dns-8277.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8277.svc.cluster.local]

    
    Sep  5 14:48:04.937: INFO: DNS probes using dns-8277/dns-test-6589efe0-81f8-4d49-b81c-14fbd1ee1248 succeeded
    
    STEP: deleting the pod
    STEP: deleting the test headless service
    [AfterEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:48:04.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-8277" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":51,"skipped":1087,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]}

    [BeforeEach] [sig-apps] ReplicationController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:48:05.005: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename replication-controller
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:48:15.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replication-controller-7644" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":-1,"completed":52,"skipped":1087,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]}

    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:48:15.223: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name projected-configmap-test-volume-7474aef0-5b4e-4729-a89a-3e84387b7082
    STEP: Creating a pod to test consume configMaps
    Sep  5 14:48:15.277: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-918bd66b-e7ab-46fd-a235-2cef2381dd68" in namespace "projected-9710" to be "Succeeded or Failed"
    Sep  5 14:48:15.282: INFO: Pod "pod-projected-configmaps-918bd66b-e7ab-46fd-a235-2cef2381dd68": Phase="Pending", Reason="", readiness=false. Elapsed: 4.552121ms
    Sep  5 14:48:17.290: INFO: Pod "pod-projected-configmaps-918bd66b-e7ab-46fd-a235-2cef2381dd68": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012267941s
    Sep  5 14:48:19.296: INFO: Pod "pod-projected-configmaps-918bd66b-e7ab-46fd-a235-2cef2381dd68": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01850125s
    STEP: Saw pod success
    Sep  5 14:48:19.296: INFO: Pod "pod-projected-configmaps-918bd66b-e7ab-46fd-a235-2cef2381dd68" satisfied condition "Succeeded or Failed"
    Sep  5 14:48:19.301: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-worker-9oo03u pod pod-projected-configmaps-918bd66b-e7ab-46fd-a235-2cef2381dd68 container agnhost-container: <nil>
    STEP: delete the pod
    Sep  5 14:48:19.331: INFO: Waiting for pod pod-projected-configmaps-918bd66b-e7ab-46fd-a235-2cef2381dd68 to disappear
    Sep  5 14:48:19.336: INFO: Pod pod-projected-configmaps-918bd66b-e7ab-46fd-a235-2cef2381dd68 no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:48:19.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-9710" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":53,"skipped":1087,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:48:19.384: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename downward-api
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should provide host IP as an env var [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward api env vars
    Sep  5 14:48:19.430: INFO: Waiting up to 5m0s for pod "downward-api-bd0a068d-3341-44ef-99a8-346b8b1662b2" in namespace "downward-api-8756" to be "Succeeded or Failed"
    Sep  5 14:48:19.435: INFO: Pod "downward-api-bd0a068d-3341-44ef-99a8-346b8b1662b2": Phase="Pending", Reason="", readiness=false. Elapsed: 5.405734ms
    Sep  5 14:48:21.441: INFO: Pod "downward-api-bd0a068d-3341-44ef-99a8-346b8b1662b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011432899s
    Sep  5 14:48:23.448: INFO: Pod "downward-api-bd0a068d-3341-44ef-99a8-346b8b1662b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018186362s
    STEP: Saw pod success
    Sep  5 14:48:23.448: INFO: Pod "downward-api-bd0a068d-3341-44ef-99a8-346b8b1662b2" satisfied condition "Succeeded or Failed"
    Sep  5 14:48:23.452: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-worker-9oo03u pod downward-api-bd0a068d-3341-44ef-99a8-346b8b1662b2 container dapi-container: <nil>
    STEP: delete the pod
    Sep  5 14:48:23.475: INFO: Waiting for pod downward-api-bd0a068d-3341-44ef-99a8-346b8b1662b2 to disappear
    Sep  5 14:48:23.479: INFO: Pod downward-api-bd0a068d-3341-44ef-99a8-346b8b1662b2 no longer exists
    [AfterEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:48:23.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-8756" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":54,"skipped":1097,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:48:23.565: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename init-container
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
    [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: creating the pod
    Sep  5 14:48:23.605: INFO: PodSpec: initContainers in spec.initContainers
    [AfterEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:48:29.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "init-container-3835" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":55,"skipped":1129,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSSSSSSS
    ------------------------------
    {"msg":"FAILED [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":-1,"completed":47,"skipped":955,"failed":3,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

    [BeforeEach] [sig-node] PreStop
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:47:28.578: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename prestop
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 222 lines ...
    		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
    		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
    		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
    	],
    	"StillContactingPeers": true
    }
    Sep  5 14:48:32.706: FAIL: validating pre-stop.
    Unexpected error:
        <*errors.errorString | 0xc000244280>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 21 lines ...
    [sig-node] PreStop
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
      should call prestop when killing a pod  [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep  5 14:48:32.706: validating pre-stop.
      Unexpected error:
          <*errors.errorString | 0xc000244280>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:151
    ------------------------------
    {"msg":"FAILED [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":-1,"completed":47,"skipped":955,"failed":4,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:48:32.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-6093" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info  [Conformance]","total":-1,"completed":48,"skipped":975,"failed":4,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

    
    SS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 22 lines ...
    STEP: Destroying namespace "webhook-1057-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":56,"skipped":1144,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:48:34.088: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-map-3a7ff1df-9647-4637-99de-7e447ec15470
    STEP: Creating a pod to test consume secrets
    Sep  5 14:48:34.142: INFO: Waiting up to 5m0s for pod "pod-secrets-e98bd215-c5ea-4cb0-a53e-6c799d999146" in namespace "secrets-6679" to be "Succeeded or Failed"
    Sep  5 14:48:34.146: INFO: Pod "pod-secrets-e98bd215-c5ea-4cb0-a53e-6c799d999146": Phase="Pending", Reason="", readiness=false. Elapsed: 3.936737ms
    Sep  5 14:48:36.153: INFO: Pod "pod-secrets-e98bd215-c5ea-4cb0-a53e-6c799d999146": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010307439s
    Sep  5 14:48:38.160: INFO: Pod "pod-secrets-e98bd215-c5ea-4cb0-a53e-6c799d999146": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017409999s
    STEP: Saw pod success
    Sep  5 14:48:38.160: INFO: Pod "pod-secrets-e98bd215-c5ea-4cb0-a53e-6c799d999146" satisfied condition "Succeeded or Failed"
    Sep  5 14:48:38.166: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-md-0-62h4s-76fdb55b5f-tzhp2 pod pod-secrets-e98bd215-c5ea-4cb0-a53e-6c799d999146 container secret-volume-test: <nil>
    STEP: delete the pod
    Sep  5 14:48:38.199: INFO: Waiting for pod pod-secrets-e98bd215-c5ea-4cb0-a53e-6c799d999146 to disappear
    Sep  5 14:48:38.204: INFO: Pod pod-secrets-e98bd215-c5ea-4cb0-a53e-6c799d999146 no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:48:38.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-6679" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":57,"skipped":1182,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
    STEP: Setting up data
    [It] should support subpaths with secret pod [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating pod pod-subpath-test-secret-kbz4
    STEP: Creating a pod to test atomic-volume-subpath
    Sep  5 14:48:33.118: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-kbz4" in namespace "subpath-7186" to be "Succeeded or Failed"
    Sep  5 14:48:33.124: INFO: Pod "pod-subpath-test-secret-kbz4": Phase="Pending", Reason="", readiness=false. Elapsed: 5.386125ms
    Sep  5 14:48:35.135: INFO: Pod "pod-subpath-test-secret-kbz4": Phase="Running", Reason="", readiness=true. Elapsed: 2.016937529s
    Sep  5 14:48:37.142: INFO: Pod "pod-subpath-test-secret-kbz4": Phase="Running", Reason="", readiness=true. Elapsed: 4.023677093s
    Sep  5 14:48:39.151: INFO: Pod "pod-subpath-test-secret-kbz4": Phase="Running", Reason="", readiness=true. Elapsed: 6.033001403s
    Sep  5 14:48:41.157: INFO: Pod "pod-subpath-test-secret-kbz4": Phase="Running", Reason="", readiness=true. Elapsed: 8.039117958s
    Sep  5 14:48:43.164: INFO: Pod "pod-subpath-test-secret-kbz4": Phase="Running", Reason="", readiness=true. Elapsed: 10.045468794s
... skipping 2 lines ...
    Sep  5 14:48:49.182: INFO: Pod "pod-subpath-test-secret-kbz4": Phase="Running", Reason="", readiness=true. Elapsed: 16.063456741s
    Sep  5 14:48:51.188: INFO: Pod "pod-subpath-test-secret-kbz4": Phase="Running", Reason="", readiness=true. Elapsed: 18.069429711s
    Sep  5 14:48:53.194: INFO: Pod "pod-subpath-test-secret-kbz4": Phase="Running", Reason="", readiness=true. Elapsed: 20.075669191s
    Sep  5 14:48:55.200: INFO: Pod "pod-subpath-test-secret-kbz4": Phase="Running", Reason="", readiness=false. Elapsed: 22.082069896s
    Sep  5 14:48:57.206: INFO: Pod "pod-subpath-test-secret-kbz4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.088310769s
    STEP: Saw pod success
    Sep  5 14:48:57.207: INFO: Pod "pod-subpath-test-secret-kbz4" satisfied condition "Succeeded or Failed"
    Sep  5 14:48:57.211: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-md-0-62h4s-76fdb55b5f-tzhp2 pod pod-subpath-test-secret-kbz4 container test-container-subpath-secret-kbz4: <nil>
    STEP: delete the pod
    Sep  5 14:48:57.237: INFO: Waiting for pod pod-subpath-test-secret-kbz4 to disappear
    Sep  5 14:48:57.246: INFO: Pod pod-subpath-test-secret-kbz4 no longer exists
    STEP: Deleting pod pod-subpath-test-secret-kbz4
    Sep  5 14:48:57.246: INFO: Deleting pod "pod-subpath-test-secret-kbz4" in namespace "subpath-7186"
    [AfterEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:48:57.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "subpath-7186" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":-1,"completed":49,"skipped":977,"failed":4,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:49:14.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-4906" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":-1,"completed":50,"skipped":1009,"failed":4,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 24 lines ...
    STEP: retrieving the pod
    STEP: looking for the results for each expected name from probers
    Sep  5 14:48:46.462: INFO: File wheezy_udp@dns-test-service-3.dns-7828.svc.cluster.local from pod  dns-7828/dns-test-8a05db25-daf1-4a26-be0d-3cbaad43fe60 contains 'foo.example.com.
    ' instead of 'bar.example.com.'
    Sep  5 14:48:46.467: INFO: File jessie_udp@dns-test-service-3.dns-7828.svc.cluster.local from pod  dns-7828/dns-test-8a05db25-daf1-4a26-be0d-3cbaad43fe60 contains 'foo.example.com.
    ' instead of 'bar.example.com.'
    Sep  5 14:48:46.468: INFO: Lookups using dns-7828/dns-test-8a05db25-daf1-4a26-be0d-3cbaad43fe60 failed for: [wheezy_udp@dns-test-service-3.dns-7828.svc.cluster.local jessie_udp@dns-test-service-3.dns-7828.svc.cluster.local]

    
... skipping 28 lines ...
    Sep  5 14:49:11.478: INFO: File wheezy_udp@dns-test-service-3.dns-7828.svc.cluster.local from pod  dns-7828/dns-test-8a05db25-daf1-4a26-be0d-3cbaad43fe60 contains 'foo.example.com.
    ' instead of 'bar.example.com.'
    Sep  5 14:49:11.487: INFO: Lookups using dns-7828/dns-test-8a05db25-daf1-4a26-be0d-3cbaad43fe60 failed for: [wheezy_udp@dns-test-service-3.dns-7828.svc.cluster.local]

    
    Sep  5 14:49:16.484: INFO: DNS probes using dns-test-8a05db25-daf1-4a26-be0d-3cbaad43fe60 succeeded
    
    STEP: deleting the pod
    STEP: changing the service to type=ClusterIP
    STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7828.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-7828.svc.cluster.local; sleep 1; done
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:49:20.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-7828" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":58,"skipped":1192,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] StatefulSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 23 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:49:41.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "statefulset-1591" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should list, patch and delete a collection of StatefulSets [Conformance]","total":-1,"completed":59,"skipped":1214,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Watchers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:49:45.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "watch-3808" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":-1,"completed":60,"skipped":1218,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-network] Networking
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 35 lines ...
    Sep  5 14:36:57.630: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.6.23:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6844 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 14:36:57.630: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 14:36:57.817: INFO: Found all 1 expected endpoints: [netserver-2]
    Sep  5 14:36:57.817: INFO: Going to poll 192.168.2.31 on port 8083 at least 0 times, with a maximum of 46 tries before failing
    Sep  5 14:36:57.826: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.31:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6844 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 14:36:57.826: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 14:37:12.998: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.31:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep  5 14:37:12.998: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 14:37:15.005: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.31:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6844 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 14:37:15.005: INFO: >>> kubeConfig: /tmp/kubeconfig
... skipping 140 lines ...
    Sep  5 14:45:29.787: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.31:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep  5 14:45:29.787: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 14:45:31.798: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.31:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6844 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 14:45:31.798: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 14:45:46.939: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.31:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep  5 14:45:46.939: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 14:45:48.946: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.31:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6844 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 14:45:48.946: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 14:46:04.080: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.31:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep  5 14:46:04.080: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 14:46:06.085: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.31:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6844 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 14:46:06.085: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 14:46:21.194: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.31:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep  5 14:46:21.194: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 14:46:23.199: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.31:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6844 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 14:46:23.199: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 14:46:38.305: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.31:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep  5 14:46:38.305: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 14:46:40.312: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.31:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6844 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 14:46:40.312: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 14:46:55.441: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.31:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep  5 14:46:55.441: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 14:46:57.447: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.31:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6844 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 14:46:57.447: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 14:47:12.542: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.31:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep  5 14:47:12.542: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 14:47:14.547: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.31:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6844 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 14:47:14.547: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 14:47:29.688: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.31:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep  5 14:47:29.688: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 14:47:31.695: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.31:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6844 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 14:47:31.695: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 14:47:46.808: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.31:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep  5 14:47:46.808: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 14:47:48.814: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.31:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6844 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 14:47:48.814: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 14:48:03.906: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.31:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep  5 14:48:03.907: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 14:48:05.912: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.31:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6844 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 14:48:05.912: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 14:48:21.028: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.31:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep  5 14:48:21.028: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 14:48:23.035: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.31:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6844 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 14:48:23.035: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 14:48:38.134: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.31:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep  5 14:48:38.134: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 14:48:40.140: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.31:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6844 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 14:48:40.140: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 14:48:55.243: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.31:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep  5 14:48:55.243: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 14:48:57.248: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.31:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6844 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 14:48:57.248: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 14:49:12.380: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.31:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep  5 14:49:12.380: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 14:49:14.400: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.31:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6844 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 14:49:14.400: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 14:49:29.596: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.31:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep  5 14:49:29.596: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 14:49:31.608: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.31:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6844 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 14:49:31.608: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 14:49:46.773: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.31:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep  5 14:49:46.773: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 14:49:48.781: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.31:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6844 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 14:49:48.781: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 14:50:03.940: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.31:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep  5 14:50:03.940: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 14:50:05.940: INFO: 
    Output of kubectl describe pod pod-network-test-6844/netserver-0:
    
    Sep  5 14:50:05.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=pod-network-test-6844 describe pod netserver-0 --namespace=pod-network-test-6844'
    Sep  5 14:50:06.159: INFO: stderr: ""
... skipping 237 lines ...
      ----    ------     ----  ----               -------
      Normal  Scheduled  13m   default-scheduler  Successfully assigned pod-network-test-6844/netserver-3 to k8s-upgrade-and-conformance-s5wz98-worker-fcjlzd
      Normal  Pulled     13m   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.39" already present on machine
      Normal  Created    13m   kubelet            Created container webserver
      Normal  Started    13m   kubelet            Started container webserver
    
    Sep  5 14:50:06.807: FAIL: Error dialing HTTP node to pod failed to find expected endpoints, 

    tries 46
    Command curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.31:8083/hostName
    retrieved map[]
    expected map[netserver-3:{}]
    
    Full Stack Trace
... skipping 16 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
      Granular Checks: Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
        should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] [It]
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
        Sep  5 14:50:06.808: Error dialing HTTP node to pod failed to find expected endpoints, 

        tries 46
        Command curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.31:8083/hostName
        retrieved map[]
        expected map[netserver-3:{}]
    
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
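A side note on the repeated `command terminated with exit code 1, stdout: "", stderr: ""` lines above: without `pipefail`, the exit status of the probe pipeline is that of its last stage, and `grep -v '^\s*$'` exits 1 whenever it selects no lines. So an empty response from curl surfaces as exit code 1 from grep, not from curl itself. A minimal local sketch of that behavior (GNU grep assumed for the `\s` extension):

```shell
# Empty input: grep selects no lines, so the pipeline exits 1
# (mirroring the "exit code 1, stdout: \"\"" entries in the log).
printf '' | grep -v '^\s*$'
echo "pipeline exit code: $?"

# Non-blank input: the line passes the blank-line filter, pipeline exits 0.
printf 'netserver-3\n' | grep -v '^\s*$'
echo "pipeline exit code: $?"
```

This is why the test framework reports an empty stdout rather than a curl error: the connection failure and an empty-but-successful response look identical once filtered through this pipeline.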
... skipping 19 lines ...
    STEP: Creating a validating webhook configuration
    Sep  5 14:49:59.272: INFO: Waiting for webhook configuration to be ready...
    Sep  5 14:50:09.396: INFO: Waiting for webhook configuration to be ready...
    Sep  5 14:50:19.507: INFO: Waiting for webhook configuration to be ready...
    Sep  5 14:50:29.597: INFO: Waiting for webhook configuration to be ready...
    Sep  5 14:50:39.628: INFO: Waiting for webhook configuration to be ready...
    Sep  5 14:50:39.628: FAIL: waiting for webhook configuration to be ready

    Unexpected error:

        <*errors.errorString | 0xc000248290>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 21 lines ...
    [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
      patching/updating a validating webhook should work [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep  5 14:50:39.628: waiting for webhook configuration to be ready
      Unexpected error:

          <*errors.errorString | 0xc000248290>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:432
    ------------------------------
    {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":60,"skipped":1221,"failed":7,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]"]}

    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:50:39.826: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename webhook
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 22 lines ...
    STEP: Destroying namespace "webhook-4835-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":61,"skipped":1221,"failed":7,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  5 14:50:44.628: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9a495641-2d9a-4df9-aea7-d070ff67fd62" in namespace "downward-api-1752" to be "Succeeded or Failed"

    Sep  5 14:50:44.638: INFO: Pod "downwardapi-volume-9a495641-2d9a-4df9-aea7-d070ff67fd62": Phase="Pending", Reason="", readiness=false. Elapsed: 9.369477ms
    Sep  5 14:50:46.646: INFO: Pod "downwardapi-volume-9a495641-2d9a-4df9-aea7-d070ff67fd62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017674261s
    Sep  5 14:50:48.654: INFO: Pod "downwardapi-volume-9a495641-2d9a-4df9-aea7-d070ff67fd62": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025659502s
    STEP: Saw pod success
    Sep  5 14:50:48.654: INFO: Pod "downwardapi-volume-9a495641-2d9a-4df9-aea7-d070ff67fd62" satisfied condition "Succeeded or Failed"

    Sep  5 14:50:48.665: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-md-0-62h4s-76fdb55b5f-tzhp2 pod downwardapi-volume-9a495641-2d9a-4df9-aea7-d070ff67fd62 container client-container: <nil>
    STEP: delete the pod
    Sep  5 14:50:48.716: INFO: Waiting for pod downwardapi-volume-9a495641-2d9a-4df9-aea7-d070ff67fd62 to disappear
    Sep  5 14:50:48.729: INFO: Pod downwardapi-volume-9a495641-2d9a-4df9-aea7-d070ff67fd62 no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:50:48.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-1752" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":62,"skipped":1247,"failed":7,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Job
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:50:48.888: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename job
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a job
    STEP: Ensuring job reaches completions
    [AfterEach] [sig-apps] Job
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:50:58.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "job-6831" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":63,"skipped":1270,"failed":7,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should provide container's memory request [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  5 14:50:59.281: INFO: Waiting up to 5m0s for pod "downwardapi-volume-67c2a5a0-d10d-493f-aed9-251a1e1238d3" in namespace "downward-api-8799" to be "Succeeded or Failed"

    Sep  5 14:50:59.289: INFO: Pod "downwardapi-volume-67c2a5a0-d10d-493f-aed9-251a1e1238d3": Phase="Pending", Reason="", readiness=false. Elapsed: 7.463822ms
    Sep  5 14:51:01.296: INFO: Pod "downwardapi-volume-67c2a5a0-d10d-493f-aed9-251a1e1238d3": Phase="Running", Reason="", readiness=true. Elapsed: 2.014685436s
    Sep  5 14:51:03.309: INFO: Pod "downwardapi-volume-67c2a5a0-d10d-493f-aed9-251a1e1238d3": Phase="Running", Reason="", readiness=false. Elapsed: 4.02771695s
    Sep  5 14:51:05.315: INFO: Pod "downwardapi-volume-67c2a5a0-d10d-493f-aed9-251a1e1238d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.033963894s
    STEP: Saw pod success
    Sep  5 14:51:05.315: INFO: Pod "downwardapi-volume-67c2a5a0-d10d-493f-aed9-251a1e1238d3" satisfied condition "Succeeded or Failed"

    Sep  5 14:51:05.323: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-worker-9oo03u pod downwardapi-volume-67c2a5a0-d10d-493f-aed9-251a1e1238d3 container client-container: <nil>
    STEP: delete the pod
    Sep  5 14:51:05.377: INFO: Waiting for pod downwardapi-volume-67c2a5a0-d10d-493f-aed9-251a1e1238d3 to disappear
    Sep  5 14:51:05.383: INFO: Pod downwardapi-volume-67c2a5a0-d10d-493f-aed9-251a1e1238d3 no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:51:05.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-8799" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":64,"skipped":1305,"failed":7,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]"]}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 20 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:51:08.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-4599" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":-1,"completed":65,"skipped":1313,"failed":7,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Watchers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 27 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:52:09.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "watch-8334" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":-1,"completed":66,"skipped":1342,"failed":7,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicaSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 34 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:52:14.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replicaset-4871" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicaSet should validate Replicaset Status endpoints [Conformance]","total":-1,"completed":67,"skipped":1407,"failed":7,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:52:21.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-4225" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":68,"skipped":1430,"failed":7,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:52:21.578: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should fail to create secret due to empty secret key [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating projection with secret that has name secret-emptykey-test-4d30f70f-c9b7-4a31-a00e-35fdaef74886
    [AfterEach] [sig-node] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:52:21.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-1210" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":69,"skipped":1467,"failed":7,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 62 lines ...
    STEP: Destroying namespace "services-5779" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":70,"skipped":1490,"failed":7,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]"]}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 29 lines ...
    STEP: Destroying namespace "services-9041" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":71,"skipped":1496,"failed":7,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]"]}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:53:13.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "init-container-700" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":-1,"completed":72,"skipped":1504,"failed":7,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 104 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:53:20.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-1403" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":73,"skipped":1522,"failed":7,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should provide podname only [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  5 14:53:21.177: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9cefa683-d76c-4ca3-a97c-0eed6632f005" in namespace "projected-5769" to be "Succeeded or Failed"
    Sep  5 14:53:21.185: INFO: Pod "downwardapi-volume-9cefa683-d76c-4ca3-a97c-0eed6632f005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.242577ms
    Sep  5 14:53:23.194: INFO: Pod "downwardapi-volume-9cefa683-d76c-4ca3-a97c-0eed6632f005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016590821s
    Sep  5 14:53:25.201: INFO: Pod "downwardapi-volume-9cefa683-d76c-4ca3-a97c-0eed6632f005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023661287s
    STEP: Saw pod success
    Sep  5 14:53:25.201: INFO: Pod "downwardapi-volume-9cefa683-d76c-4ca3-a97c-0eed6632f005" satisfied condition "Succeeded or Failed"
    Sep  5 14:53:25.208: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-worker-fcjlzd pod downwardapi-volume-9cefa683-d76c-4ca3-a97c-0eed6632f005 container client-container: <nil>
    STEP: delete the pod
    Sep  5 14:53:25.240: INFO: Waiting for pod downwardapi-volume-9cefa683-d76c-4ca3-a97c-0eed6632f005 to disappear
    Sep  5 14:53:25.246: INFO: Pod downwardapi-volume-9cefa683-d76c-4ca3-a97c-0eed6632f005 no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:53:25.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-5769" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":74,"skipped":1549,"failed":7,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-node] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 6 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:53:27.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "containers-307" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":75,"skipped":1552,"failed":7,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]"]}

    
    SS
    ------------------------------
    [BeforeEach] [sig-apps] CronJob
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
    • [SLOW TEST:300.187 seconds]
    [sig-apps] CronJob
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
      should not schedule jobs when suspended [Slow] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","total":-1,"completed":51,"skipped":1023,"failed":4,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 24 lines ...
    Sep  5 14:54:19.428: INFO: Unable to read jessie_udp@dns-test-service.dns-5911 from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:19.435: INFO: Unable to read jessie_tcp@dns-test-service.dns-5911 from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:19.443: INFO: Unable to read jessie_udp@dns-test-service.dns-5911.svc from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:19.454: INFO: Unable to read jessie_tcp@dns-test-service.dns-5911.svc from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:19.463: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5911.svc from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:19.472: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5911.svc from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:19.531: INFO: Lookups using dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5911 wheezy_tcp@dns-test-service.dns-5911 wheezy_udp@dns-test-service.dns-5911.svc wheezy_tcp@dns-test-service.dns-5911.svc wheezy_udp@_http._tcp.dns-test-service.dns-5911.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5911.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5911 jessie_tcp@dns-test-service.dns-5911 jessie_udp@dns-test-service.dns-5911.svc jessie_tcp@dns-test-service.dns-5911.svc jessie_udp@_http._tcp.dns-test-service.dns-5911.svc jessie_tcp@_http._tcp.dns-test-service.dns-5911.svc]

    
    Sep  5 14:54:24.541: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:24.551: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:24.560: INFO: Unable to read wheezy_udp@dns-test-service.dns-5911 from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:24.567: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5911 from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:24.577: INFO: Unable to read wheezy_udp@dns-test-service.dns-5911.svc from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:24.589: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5911.svc from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:24.677: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:24.687: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:24.693: INFO: Unable to read jessie_udp@dns-test-service.dns-5911 from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:24.702: INFO: Unable to read jessie_tcp@dns-test-service.dns-5911 from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:24.710: INFO: Unable to read jessie_udp@dns-test-service.dns-5911.svc from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:24.722: INFO: Unable to read jessie_tcp@dns-test-service.dns-5911.svc from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:24.793: INFO: Lookups using dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5911 wheezy_tcp@dns-test-service.dns-5911 wheezy_udp@dns-test-service.dns-5911.svc wheezy_tcp@dns-test-service.dns-5911.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5911 jessie_tcp@dns-test-service.dns-5911 jessie_udp@dns-test-service.dns-5911.svc jessie_tcp@dns-test-service.dns-5911.svc]

    
    Sep  5 14:54:29.540: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:29.547: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:29.554: INFO: Unable to read wheezy_udp@dns-test-service.dns-5911 from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:29.562: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5911 from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:29.569: INFO: Unable to read wheezy_udp@dns-test-service.dns-5911.svc from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:29.576: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5911.svc from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:29.656: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:29.665: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:29.673: INFO: Unable to read jessie_udp@dns-test-service.dns-5911 from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:29.681: INFO: Unable to read jessie_tcp@dns-test-service.dns-5911 from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:29.689: INFO: Unable to read jessie_udp@dns-test-service.dns-5911.svc from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:29.698: INFO: Unable to read jessie_tcp@dns-test-service.dns-5911.svc from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:29.776: INFO: Lookups using dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5911 wheezy_tcp@dns-test-service.dns-5911 wheezy_udp@dns-test-service.dns-5911.svc wheezy_tcp@dns-test-service.dns-5911.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5911 jessie_tcp@dns-test-service.dns-5911 jessie_udp@dns-test-service.dns-5911.svc jessie_tcp@dns-test-service.dns-5911.svc]

    
    Sep  5 14:54:34.540: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:34.549: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:34.558: INFO: Unable to read wheezy_udp@dns-test-service.dns-5911 from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:34.567: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5911 from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:34.577: INFO: Unable to read wheezy_udp@dns-test-service.dns-5911.svc from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:34.583: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5911.svc from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:34.656: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:34.662: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:34.675: INFO: Unable to read jessie_udp@dns-test-service.dns-5911 from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:34.685: INFO: Unable to read jessie_tcp@dns-test-service.dns-5911 from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:34.697: INFO: Unable to read jessie_udp@dns-test-service.dns-5911.svc from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:34.702: INFO: Unable to read jessie_tcp@dns-test-service.dns-5911.svc from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:34.776: INFO: Lookups using dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5911 wheezy_tcp@dns-test-service.dns-5911 wheezy_udp@dns-test-service.dns-5911.svc wheezy_tcp@dns-test-service.dns-5911.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5911 jessie_tcp@dns-test-service.dns-5911 jessie_udp@dns-test-service.dns-5911.svc jessie_tcp@dns-test-service.dns-5911.svc]

    
    Sep  5 14:54:39.542: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:39.553: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:39.562: INFO: Unable to read wheezy_udp@dns-test-service.dns-5911 from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:39.572: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5911 from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:39.591: INFO: Unable to read wheezy_udp@dns-test-service.dns-5911.svc from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:39.601: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5911.svc from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:39.690: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:39.700: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:39.707: INFO: Unable to read jessie_udp@dns-test-service.dns-5911 from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:39.718: INFO: Unable to read jessie_tcp@dns-test-service.dns-5911 from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:39.737: INFO: Unable to read jessie_udp@dns-test-service.dns-5911.svc from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:39.743: INFO: Unable to read jessie_tcp@dns-test-service.dns-5911.svc from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:39.837: INFO: Lookups using dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5911 wheezy_tcp@dns-test-service.dns-5911 wheezy_udp@dns-test-service.dns-5911.svc wheezy_tcp@dns-test-service.dns-5911.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5911 jessie_tcp@dns-test-service.dns-5911 jessie_udp@dns-test-service.dns-5911.svc jessie_tcp@dns-test-service.dns-5911.svc]

    
    Sep  5 14:54:44.540: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:44.551: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:44.560: INFO: Unable to read wheezy_udp@dns-test-service.dns-5911 from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:44.575: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5911 from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:44.584: INFO: Unable to read wheezy_udp@dns-test-service.dns-5911.svc from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:44.592: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5911.svc from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:44.683: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:44.691: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:44.699: INFO: Unable to read jessie_udp@dns-test-service.dns-5911 from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:44.711: INFO: Unable to read jessie_tcp@dns-test-service.dns-5911 from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:44.720: INFO: Unable to read jessie_udp@dns-test-service.dns-5911.svc from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:44.730: INFO: Unable to read jessie_tcp@dns-test-service.dns-5911.svc from pod dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad: the server could not find the requested resource (get pods dns-test-e1481beb-caa7-47ce-8461-567646c808ad)
    Sep  5 14:54:44.810: INFO: Lookups using dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5911 wheezy_tcp@dns-test-service.dns-5911 wheezy_udp@dns-test-service.dns-5911.svc wheezy_tcp@dns-test-service.dns-5911.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5911 jessie_tcp@dns-test-service.dns-5911 jessie_udp@dns-test-service.dns-5911.svc jessie_tcp@dns-test-service.dns-5911.svc]

    
    Sep  5 14:54:49.791: INFO: DNS probes using dns-5911/dns-test-e1481beb-caa7-47ce-8461-567646c808ad succeeded
    
    STEP: deleting the pod
    STEP: deleting the test service
    STEP: deleting the test headless service
    [AfterEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:54:50.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-5911" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":-1,"completed":52,"skipped":1097,"failed":4,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 45 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:55:31.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-6032" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":-1,"completed":53,"skipped":1098,"failed":4,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:55:31.203: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename security-context
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
    Sep  5 14:55:31.367: INFO: Waiting up to 5m0s for pod "security-context-a04d0342-0c43-4f7f-ad04-f798470db81f" in namespace "security-context-9842" to be "Succeeded or Failed"

    Sep  5 14:55:31.443: INFO: Pod "security-context-a04d0342-0c43-4f7f-ad04-f798470db81f": Phase="Pending", Reason="", readiness=false. Elapsed: 75.891046ms
    Sep  5 14:55:33.453: INFO: Pod "security-context-a04d0342-0c43-4f7f-ad04-f798470db81f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086132575s
    Sep  5 14:55:35.461: INFO: Pod "security-context-a04d0342-0c43-4f7f-ad04-f798470db81f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.094715057s
    STEP: Saw pod success
    Sep  5 14:55:35.462: INFO: Pod "security-context-a04d0342-0c43-4f7f-ad04-f798470db81f" satisfied condition "Succeeded or Failed"

    Sep  5 14:55:35.472: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-worker-fcjlzd pod security-context-a04d0342-0c43-4f7f-ad04-f798470db81f container test-container: <nil>
    STEP: delete the pod
    Sep  5 14:55:35.519: INFO: Waiting for pod security-context-a04d0342-0c43-4f7f-ad04-f798470db81f to disappear
    Sep  5 14:55:35.528: INFO: Pod security-context-a04d0342-0c43-4f7f-ad04-f798470db81f no longer exists
    [AfterEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:55:35.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "security-context-9842" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":54,"skipped":1103,"failed":4,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
    STEP: Destroying namespace "webhook-1571-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":55,"skipped":1107,"failed":4,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:55:42.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-8922" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":56,"skipped":1108,"failed":4,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicationController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:55:48.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replication-controller-6" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":-1,"completed":57,"skipped":1113,"failed":4,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:55:48.731: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name projected-secret-test-f61204fe-ac55-4bdd-8e25-14ec63614450
    STEP: Creating a pod to test consume secrets
    Sep  5 14:55:48.847: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-74dbb4ae-a591-4409-b865-a6f6f38c7707" in namespace "projected-7312" to be "Succeeded or Failed"

    Sep  5 14:55:48.857: INFO: Pod "pod-projected-secrets-74dbb4ae-a591-4409-b865-a6f6f38c7707": Phase="Pending", Reason="", readiness=false. Elapsed: 10.215357ms
    Sep  5 14:55:50.867: INFO: Pod "pod-projected-secrets-74dbb4ae-a591-4409-b865-a6f6f38c7707": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02037319s
    Sep  5 14:55:52.876: INFO: Pod "pod-projected-secrets-74dbb4ae-a591-4409-b865-a6f6f38c7707": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029462664s
    STEP: Saw pod success
    Sep  5 14:55:52.876: INFO: Pod "pod-projected-secrets-74dbb4ae-a591-4409-b865-a6f6f38c7707" satisfied condition "Succeeded or Failed"

    Sep  5 14:55:52.887: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-md-0-62h4s-76fdb55b5f-tzhp2 pod pod-projected-secrets-74dbb4ae-a591-4409-b865-a6f6f38c7707 container secret-volume-test: <nil>
    STEP: delete the pod
    Sep  5 14:55:52.951: INFO: Waiting for pod pod-projected-secrets-74dbb4ae-a591-4409-b865-a6f6f38c7707 to disappear
    Sep  5 14:55:52.959: INFO: Pod pod-projected-secrets-74dbb4ae-a591-4409-b865-a6f6f38c7707 no longer exists
    [AfterEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:55:52.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-7312" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":58,"skipped":1139,"failed":4,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 27 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:55:57.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-1301" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":-1,"completed":59,"skipped":1155,"failed":4,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:53:27.439: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: creating the pod with failed condition

    STEP: updating the pod
    Sep  5 14:55:28.098: INFO: Successfully updated pod "var-expansion-63195f0b-1d29-4d49-a13d-ebabbef6c745"
    STEP: waiting for pod running
    STEP: deleting the pod gracefully
    Sep  5 14:55:30.117: INFO: Deleting pod "var-expansion-63195f0b-1d29-4d49-a13d-ebabbef6c745" in namespace "var-expansion-4889"
    Sep  5 14:55:30.135: INFO: Wait up to 5m0s for pod "var-expansion-63195f0b-1d29-4d49-a13d-ebabbef6c745" to be fully deleted
... skipping 6 lines ...
    • [SLOW TEST:154.741 seconds]
    [sig-node] Variable Expansion
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
      should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]","total":-1,"completed":76,"skipped":1554,"failed":7,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]"]}

    
    SS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:55:57.908: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-map-5a8bc9de-f90d-4aab-945c-300e0ff49469
    STEP: Creating a pod to test consume secrets
    Sep  5 14:55:58.034: INFO: Waiting up to 5m0s for pod "pod-secrets-01dfa0d1-61dd-4a18-96c7-81ff2e9be29c" in namespace "secrets-7005" to be "Succeeded or Failed"

    Sep  5 14:55:58.047: INFO: Pod "pod-secrets-01dfa0d1-61dd-4a18-96c7-81ff2e9be29c": Phase="Pending", Reason="", readiness=false. Elapsed: 12.179966ms
    Sep  5 14:56:00.057: INFO: Pod "pod-secrets-01dfa0d1-61dd-4a18-96c7-81ff2e9be29c": Phase="Running", Reason="", readiness=true. Elapsed: 2.022079918s
    Sep  5 14:56:02.066: INFO: Pod "pod-secrets-01dfa0d1-61dd-4a18-96c7-81ff2e9be29c": Phase="Running", Reason="", readiness=false. Elapsed: 4.031710449s
    Sep  5 14:56:04.075: INFO: Pod "pod-secrets-01dfa0d1-61dd-4a18-96c7-81ff2e9be29c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.041044217s
    STEP: Saw pod success
    Sep  5 14:56:04.076: INFO: Pod "pod-secrets-01dfa0d1-61dd-4a18-96c7-81ff2e9be29c" satisfied condition "Succeeded or Failed"

    Sep  5 14:56:04.084: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-md-0-62h4s-76fdb55b5f-tzhp2 pod pod-secrets-01dfa0d1-61dd-4a18-96c7-81ff2e9be29c container secret-volume-test: <nil>
    STEP: delete the pod
    Sep  5 14:56:04.119: INFO: Waiting for pod pod-secrets-01dfa0d1-61dd-4a18-96c7-81ff2e9be29c to disappear
    Sep  5 14:56:04.130: INFO: Pod pod-secrets-01dfa0d1-61dd-4a18-96c7-81ff2e9be29c no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:56:04.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-7005" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":60,"skipped":1159,"failed":4,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 7 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:56:04.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "custom-resource-definition-5938" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":-1,"completed":61,"skipped":1173,"failed":4,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:56:02.197: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0777 on node default medium
    Sep  5 14:56:02.306: INFO: Waiting up to 5m0s for pod "pod-dc57afb6-d756-4c8d-826a-70e322fc10c6" in namespace "emptydir-4345" to be "Succeeded or Failed"

    Sep  5 14:56:02.319: INFO: Pod "pod-dc57afb6-d756-4c8d-826a-70e322fc10c6": Phase="Pending", Reason="", readiness=false. Elapsed: 12.533748ms
    Sep  5 14:56:04.339: INFO: Pod "pod-dc57afb6-d756-4c8d-826a-70e322fc10c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033012571s
    Sep  5 14:56:06.345: INFO: Pod "pod-dc57afb6-d756-4c8d-826a-70e322fc10c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038953873s
    Sep  5 14:56:08.355: INFO: Pod "pod-dc57afb6-d756-4c8d-826a-70e322fc10c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.048440315s
    STEP: Saw pod success
    Sep  5 14:56:08.355: INFO: Pod "pod-dc57afb6-d756-4c8d-826a-70e322fc10c6" satisfied condition "Succeeded or Failed"

    Sep  5 14:56:08.363: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-worker-fcjlzd pod pod-dc57afb6-d756-4c8d-826a-70e322fc10c6 container test-container: <nil>
    STEP: delete the pod
    Sep  5 14:56:08.415: INFO: Waiting for pod pod-dc57afb6-d756-4c8d-826a-70e322fc10c6 to disappear
    Sep  5 14:56:08.424: INFO: Pod pod-dc57afb6-d756-4c8d-826a-70e322fc10c6 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:56:08.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-4345" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":77,"skipped":1556,"failed":7,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 17 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:56:09.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-1888" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":62,"skipped":1192,"failed":4,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 16 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:56:14.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-5589" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]","total":-1,"completed":63,"skipped":1246,"failed":4,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should provide container's memory limit [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  5 14:56:08.727: INFO: Waiting up to 5m0s for pod "downwardapi-volume-210ea928-7dee-4463-8854-a080d31009ef" in namespace "projected-9052" to be "Succeeded or Failed"

    Sep  5 14:56:08.735: INFO: Pod "downwardapi-volume-210ea928-7dee-4463-8854-a080d31009ef": Phase="Pending", Reason="", readiness=false. Elapsed: 7.988239ms
    Sep  5 14:56:10.749: INFO: Pod "downwardapi-volume-210ea928-7dee-4463-8854-a080d31009ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021140398s
    Sep  5 14:56:12.757: INFO: Pod "downwardapi-volume-210ea928-7dee-4463-8854-a080d31009ef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029701081s
    Sep  5 14:56:14.772: INFO: Pod "downwardapi-volume-210ea928-7dee-4463-8854-a080d31009ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.044168745s
    STEP: Saw pod success
    Sep  5 14:56:14.772: INFO: Pod "downwardapi-volume-210ea928-7dee-4463-8854-a080d31009ef" satisfied condition "Succeeded or Failed"

    Sep  5 14:56:14.780: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-worker-9oo03u pod downwardapi-volume-210ea928-7dee-4463-8854-a080d31009ef container client-container: <nil>
    STEP: delete the pod
    Sep  5 14:56:14.840: INFO: Waiting for pod downwardapi-volume-210ea928-7dee-4463-8854-a080d31009ef to disappear
    Sep  5 14:56:14.849: INFO: Pod downwardapi-volume-210ea928-7dee-4463-8854-a080d31009ef no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:56:14.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-9052" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":78,"skipped":1591,"failed":7,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 23 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:56:20.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-3659" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":-1,"completed":79,"skipped":1657,"failed":7,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]"]}

    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] RuntimeClass
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 19 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 14:56:20.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "runtimeclass-3553" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] RuntimeClass  should support RuntimeClasses API operations [Conformance]","total":-1,"completed":80,"skipped":1671,"failed":7,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 65 lines ...
    STEP: Destroying namespace "services-964" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":81,"skipped":1672,"failed":7,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]"]}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 9 lines ...
    STEP: creating replication controller affinity-clusterip in namespace services-161
    I0905 14:56:54.133731      19 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-161, replica count: 3
    I0905 14:56:57.185160      19 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
    Sep  5 14:56:57.194: INFO: Creating new exec pod
    Sep  5 14:57:00.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-161 exec execpod-affinitygzppr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 14:57:02.439: INFO: rc: 1
    Sep  5 14:57:02.439: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-161 exec execpod-affinitygzppr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
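The attempt above is the first of a long once-per-second retry loop: the framework re-runs the same `kubectl exec ... nc` probe until the service answers or the attempt budget runs out. A minimal sketch of that pattern (the function name and parameters here are illustrative, not the framework's actual API; in the real test the probed command is the kubectl/nc line shown above):

```python
import subprocess
import time

def probe_until_reachable(cmd, attempts=10, delay=1.0):
    """Re-run *cmd* until it exits 0, mirroring the once-per-second
    "Retrying..." loop in the log. Returns the successful attempt
    number, or None when every attempt fails."""
    for attempt in range(1, attempts + 1):
        # A non-zero return code corresponds to the "rc: 1" lines above.
        rc = subprocess.run(cmd, capture_output=True).returncode
        if rc == 0:
            return attempt
        print("Retrying...")
        time.sleep(delay)
    return None
```

In this run the probe never succeeds: `nc` times out on every attempt, so the loop keeps printing "Retrying..." until the test's own deadline expires.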
    Sep  5 14:57:03.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-161 exec execpod-affinitygzppr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 14:57:05.642: INFO: rc: 1
    Sep  5 14:57:05.642: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-161 exec execpod-affinitygzppr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 14:57:06.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-161 exec execpod-affinitygzppr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 14:57:08.644: INFO: rc: 1
    Sep  5 14:57:08.644: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-161 exec execpod-affinitygzppr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 14:57:09.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-161 exec execpod-affinitygzppr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 14:57:11.647: INFO: rc: 1
    Sep  5 14:57:11.647: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-161 exec execpod-affinitygzppr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 14:57:12.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-161 exec execpod-affinitygzppr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 14:57:14.622: INFO: rc: 1
    Sep  5 14:57:14.622: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-161 exec execpod-affinitygzppr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 14:57:15.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-161 exec execpod-affinitygzppr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 14:57:17.634: INFO: rc: 1
    Sep  5 14:57:17.634: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-161 exec execpod-affinitygzppr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 14:57:18.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-161 exec execpod-affinitygzppr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 14:57:20.631: INFO: rc: 1
    Sep  5 14:57:20.631: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-161 exec execpod-affinitygzppr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 14:57:21.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-161 exec execpod-affinitygzppr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 14:57:23.681: INFO: rc: 1
    Sep  5 14:57:23.681: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-161 exec execpod-affinitygzppr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 14:57:24.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-161 exec execpod-affinitygzppr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 14:57:26.641: INFO: rc: 1
    Sep  5 14:57:26.641: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-161 exec execpod-affinitygzppr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 14:57:27.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-161 exec execpod-affinitygzppr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 14:57:29.639: INFO: rc: 1
    Sep  5 14:57:29.639: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-161 exec execpod-affinitygzppr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 14:57:30.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-161 exec execpod-affinitygzppr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 14:57:32.638: INFO: rc: 1
    Sep  5 14:57:32.638: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-161 exec execpod-affinitygzppr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 14:57:33.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-161 exec execpod-affinitygzppr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 14:57:35.635: INFO: rc: 1
    Sep  5 14:57:35.635: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-161 exec execpod-affinitygzppr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 14:57:36.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-161 exec execpod-affinitygzppr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 14:57:38.638: INFO: rc: 1
    Sep  5 14:57:38.638: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-161 exec execpod-affinitygzppr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 14:57:39.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-161 exec execpod-affinitygzppr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 14:57:41.636: INFO: rc: 1
    Sep  5 14:57:41.636: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-161 exec execpod-affinitygzppr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 14:57:42.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-161 exec execpod-affinitygzppr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 14:57:44.631: INFO: rc: 1
    Sep  5 14:57:44.631: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-161 exec execpod-affinitygzppr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 14:57:45.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-161 exec execpod-affinitygzppr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 14:57:47.631: INFO: rc: 1
    Sep  5 14:57:47.631: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-161 exec execpod-affinitygzppr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 14:57:48.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-161 exec execpod-affinitygzppr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 14:57:50.668: INFO: rc: 1
    Sep  5 14:57:50.668: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-161 exec execpod-affinitygzppr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 14:57:51.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-161 exec execpod-affinitygzppr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 14:57:53.655: INFO: rc: 1
    Sep  5 14:57:53.655: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-161 exec execpod-affinitygzppr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 14:57:54.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-161 exec execpod-affinitygzppr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 14:57:56.686: INFO: rc: 1
    Sep  5 14:57:56.687: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-161 exec execpod-affinitygzppr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 14:57:57.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-161 exec execpod-affinitygzppr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 14:57:59.618: INFO: rc: 1
    Sep  5 14:57:59.618: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-161 exec execpod-affinitygzppr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 14:58:00.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-161 exec execpod-affinitygzppr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 14:58:02.651: INFO: rc: 1
    Sep  5 14:58:02.652: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-161 exec execpod-affinitygzppr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 14:58:03.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-161 exec execpod-affinitygzppr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 14:58:05.647: INFO: rc: 1
    Sep  5 14:58:05.647: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-161 exec execpod-affinitygzppr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 14:58:06.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-161 exec execpod-affinitygzppr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 14:58:08.635: INFO: rc: 1
    Sep  5 14:58:08.635: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-161 exec execpod-affinitygzppr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 14:58:09.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-161 exec execpod-affinitygzppr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 14:58:11.642: INFO: rc: 1
    Sep  5 14:58:11.642: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-161 exec execpod-affinitygzppr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 14:58:12.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-161 exec execpod-affinitygzppr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 14:58:14.656: INFO: rc: 1
    Sep  5 14:58:14.656: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-161 exec execpod-affinitygzppr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 14:58:15.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-161 exec execpod-affinitygzppr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 14:58:17.638: INFO: rc: 1
    Sep  5 14:58:17.638: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-161 exec execpod-affinitygzppr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 14:58:18.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-161 exec execpod-affinitygzppr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 14:58:20.636: INFO: rc: 1
    Sep  5 14:58:20.636: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-161 exec execpod-affinitygzppr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 14:58:21.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-161 exec execpod-affinitygzppr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 14:58:23.644: INFO: rc: 1
    Sep  5 14:58:23.644: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-161 exec execpod-affinitygzppr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 14:58:24.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-161 exec execpod-affinitygzppr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 14:58:26.632: INFO: rc: 1
    Sep  5 14:58:26.632: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-161 exec execpod-affinitygzppr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
... skipping 192 lines ...
    Sep  5 14:59:02.642: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-161 exec execpod-affinitygzppr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 14:59:04.851: INFO: rc: 1
    Sep  5 14:59:04.851: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-161 exec execpod-affinitygzppr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 14:59:04.851: FAIL: Unexpected error:

        <*errors.errorString | 0xc002f825f0>: {
            s: "service is not reachable within 2m0s timeout on endpoint affinity-clusterip:80 over TCP protocol",
        }
        service is not reachable within 2m0s timeout on endpoint affinity-clusterip:80 over TCP protocol
    occurred
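The FAIL above is the end of a 2m0s retry loop: the framework repeatedly execs into the client pod and probes the ClusterIP Service with netcat until it answers or the deadline passes. A minimal shell sketch of that loop (an assumption for illustration, not the framework's actual Go implementation; the kubectl invocation in the trailing comment reuses pod, namespace, and Service names from this log):

```shell
#!/bin/sh
# Hedged sketch of the retry loop visible in the log above: run a probe
# command every 3s until it succeeds or a deadline expires; 120s matches
# the 2m0s timeout reported in the failure message.
probe_until() {
  timeout_s=$1; shift
  deadline=$(( $(date +%s) + timeout_s ))
  until "$@"; do
    if [ "$(date +%s)" -ge "$deadline" ]; then
      echo "service is not reachable within ${timeout_s}s timeout" >&2
      return 1
    fi
    echo "Retrying..."
    sleep 3
  done
}

# Usage against the Service from this run (requires the test cluster's
# kubeconfig, so it is left commented out here):
# probe_until 120 kubectl --kubeconfig=/tmp/kubeconfig -n services-161 \
#   exec execpod-affinitygzppr -- nc -z -w 2 affinity-clusterip 80
```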
    
... skipping 27 lines ...
    • Failure [133.413 seconds]
    [sig-network] Services
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
      should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep  5 14:59:04.851: Unexpected error:

          <*errors.errorString | 0xc002f825f0>: {
              s: "service is not reachable within 2m0s timeout on endpoint affinity-clusterip:80 over TCP protocol",
          }
          service is not reachable within 2m0s timeout on endpoint affinity-clusterip:80 over TCP protocol
      occurred
    
... skipping 23 lines ...
    • [SLOW TEST:242.919 seconds]
    [sig-node] Probing container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
      should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":-1,"completed":64,"skipped":1255,"failed":4,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:00:19.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-1153" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":-1,"completed":65,"skipped":1277,"failed":4,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
    STEP: Setting up data
    [It] should support subpaths with downward pod [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating pod pod-subpath-test-downwardapi-lt8p
    STEP: Creating a pod to test atomic-volume-subpath
    Sep  5 15:00:19.939: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-lt8p" in namespace "subpath-8758" to be "Succeeded or Failed"

    Sep  5 15:00:19.943: INFO: Pod "pod-subpath-test-downwardapi-lt8p": Phase="Pending", Reason="", readiness=false. Elapsed: 4.141918ms
    Sep  5 15:00:21.949: INFO: Pod "pod-subpath-test-downwardapi-lt8p": Phase="Running", Reason="", readiness=true. Elapsed: 2.009810258s
    Sep  5 15:00:23.955: INFO: Pod "pod-subpath-test-downwardapi-lt8p": Phase="Running", Reason="", readiness=true. Elapsed: 4.016665166s
    Sep  5 15:00:25.961: INFO: Pod "pod-subpath-test-downwardapi-lt8p": Phase="Running", Reason="", readiness=true. Elapsed: 6.022226926s
    Sep  5 15:00:27.968: INFO: Pod "pod-subpath-test-downwardapi-lt8p": Phase="Running", Reason="", readiness=true. Elapsed: 8.029577065s
    Sep  5 15:00:29.974: INFO: Pod "pod-subpath-test-downwardapi-lt8p": Phase="Running", Reason="", readiness=true. Elapsed: 10.035165788s
... skipping 2 lines ...
    Sep  5 15:00:35.998: INFO: Pod "pod-subpath-test-downwardapi-lt8p": Phase="Running", Reason="", readiness=true. Elapsed: 16.059197452s
    Sep  5 15:00:38.005: INFO: Pod "pod-subpath-test-downwardapi-lt8p": Phase="Running", Reason="", readiness=true. Elapsed: 18.06593207s
    Sep  5 15:00:40.011: INFO: Pod "pod-subpath-test-downwardapi-lt8p": Phase="Running", Reason="", readiness=true. Elapsed: 20.071822943s
    Sep  5 15:00:42.017: INFO: Pod "pod-subpath-test-downwardapi-lt8p": Phase="Running", Reason="", readiness=false. Elapsed: 22.078506343s
    Sep  5 15:00:44.024: INFO: Pod "pod-subpath-test-downwardapi-lt8p": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.085028193s
    STEP: Saw pod success
    Sep  5 15:00:44.024: INFO: Pod "pod-subpath-test-downwardapi-lt8p" satisfied condition "Succeeded or Failed"

    Sep  5 15:00:44.031: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-worker-fcjlzd pod pod-subpath-test-downwardapi-lt8p container test-container-subpath-downwardapi-lt8p: <nil>
    STEP: delete the pod
    Sep  5 15:00:44.075: INFO: Waiting for pod pod-subpath-test-downwardapi-lt8p to disappear
    Sep  5 15:00:44.079: INFO: Pod pod-subpath-test-downwardapi-lt8p no longer exists
    STEP: Deleting pod pod-subpath-test-downwardapi-lt8p
    Sep  5 15:00:44.079: INFO: Deleting pod "pod-subpath-test-downwardapi-lt8p" in namespace "subpath-8758"
    [AfterEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:00:44.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "subpath-8758" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":-1,"completed":66,"skipped":1290,"failed":4,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

    
    SS
    ------------------------------
    {"msg":"FAILED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":81,"skipped":1676,"failed":8,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:59:07.485: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename services
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 6 lines ...
    STEP: creating replication controller affinity-clusterip in namespace services-1706
    I0905 14:59:07.558917      19 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-1706, replica count: 3
    I0905 14:59:10.609363      19 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
    Sep  5 14:59:10.619: INFO: Creating new exec pod
    Sep  5 14:59:13.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 14:59:15.818: INFO: rc: 1
    Sep  5 14:59:15.818: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 14:59:16.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 14:59:19.039: INFO: rc: 1
    Sep  5 14:59:19.039: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 14:59:19.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 14:59:22.039: INFO: rc: 1
    Sep  5 14:59:22.039: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
... skipping 160 lines ...
    Sep  5 14:59:52.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 14:59:55.014: INFO: rc: 1
    Sep  5 14:59:55.014: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 14:59:55.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 14:59:58.022: INFO: rc: 1
    Sep  5 14:59:58.022: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 14:59:58.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:00:01.005: INFO: rc: 1
    Sep  5 15:00:01.005: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:00:01.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:00:04.048: INFO: rc: 1
    Sep  5 15:00:04.048: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:00:04.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:00:07.026: INFO: rc: 1
    Sep  5 15:00:07.026: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + nc -v -t -w 2 affinity-clusterip 80
    + echo hostName
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:00:07.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:00:10.061: INFO: rc: 1
    Sep  5 15:00:10.061: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:00:10.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:00:12.999: INFO: rc: 1
    Sep  5 15:00:12.999: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:00:13.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:00:16.028: INFO: rc: 1
    Sep  5 15:00:16.028: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:00:16.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:00:19.012: INFO: rc: 1
    Sep  5 15:00:19.012: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:00:19.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:00:22.010: INFO: rc: 1
    Sep  5 15:00:22.010: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:00:22.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:00:25.049: INFO: rc: 1
    Sep  5 15:00:25.049: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:00:25.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:00:28.033: INFO: rc: 1
    Sep  5 15:00:28.033: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:00:28.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:00:31.083: INFO: rc: 1
    Sep  5 15:00:31.083: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:00:31.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:00:34.060: INFO: rc: 1
    Sep  5 15:00:34.061: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:00:34.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:00:37.076: INFO: rc: 1
    Sep  5 15:00:37.076: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:00:37.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:00:40.048: INFO: rc: 1
    Sep  5 15:00:40.048: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:00:40.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:00:43.052: INFO: rc: 1
    Sep  5 15:00:43.052: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:00:43.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:00:46.037: INFO: rc: 1
    Sep  5 15:00:46.037: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:00:46.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:00:49.045: INFO: rc: 1
    Sep  5 15:00:49.045: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:00:49.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:00:52.019: INFO: rc: 1
    Sep  5 15:00:52.019: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:00:52.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:00:55.053: INFO: rc: 1
    Sep  5 15:00:55.054: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:00:55.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:00:58.021: INFO: rc: 1
    Sep  5 15:00:58.021: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:00:58.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:01:01.046: INFO: rc: 1
    Sep  5 15:01:01.046: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:01:01.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:01:04.118: INFO: rc: 1
    Sep  5 15:01:04.118: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:01:04.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:01:07.025: INFO: rc: 1
    Sep  5 15:01:07.025: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:01:07.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:01:10.081: INFO: rc: 1
    Sep  5 15:01:10.081: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:01:10.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:01:13.022: INFO: rc: 1
    Sep  5 15:01:13.022: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:01:13.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:01:16.027: INFO: rc: 1
    Sep  5 15:01:16.027: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:01:16.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:01:18.268: INFO: rc: 1
    Sep  5 15:01:18.268: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1706 exec execpod-affinityt9pbb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:01:18.269: FAIL: Unexpected error:

        <*errors.errorString | 0xc0011d4500>: {
            s: "service is not reachable within 2m0s timeout on endpoint affinity-clusterip:80 over TCP protocol",
        }
        service is not reachable within 2m0s timeout on endpoint affinity-clusterip:80 over TCP protocol
    occurred
    
... skipping 27 lines ...
    • Failure [133.239 seconds]
    [sig-network] Services
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
      should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep  5 15:01:18.269: Unexpected error:

          <*errors.errorString | 0xc0011d4500>: {
              s: "service is not reachable within 2m0s timeout on endpoint affinity-clusterip:80 over TCP protocol",
          }
          service is not reachable within 2m0s timeout on endpoint affinity-clusterip:80 over TCP protocol
      occurred
    
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3278
    ------------------------------
    {"msg":"FAILED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":81,"skipped":1676,"failed":9,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}
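    The probe above follows a plain retry-until-deadline pattern: run the `nc` check, and on failure sleep and retry until a 2m0s timeout expires. A minimal standalone sketch of that pattern (the `retry_until` helper name and its arguments are illustrative, not taken from the test code):

    ```shell
    #!/bin/sh
    # retry_until: run a command repeatedly until it succeeds or a deadline passes.
    # $1 = total timeout in seconds, $2 = delay between attempts, rest = command.
    retry_until() {
      timeout=$1; delay=$2; shift 2
      end=$(( $(date +%s) + timeout ))
      while :; do
        if "$@"; then
          return 0
        fi
        if [ "$(date +%s)" -ge "$end" ]; then
          echo "command not successful within ${timeout}s" >&2
          return 1
        fi
        sleep "$delay"
      done
    }

    # A probe that succeeds returns immediately...
    retry_until 5 1 true && echo "reachable"
    # ...while an always-failing probe exhausts the deadline, as the nc loop did.
    retry_until 2 1 false || echo "not reachable"
    ```

    In the e2e test the probed command is the `kubectl exec … nc -v -t -w 2 affinity-clusterip 80` invocation shown in the log, with a 2m0s overall deadline.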

    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 15:01:20.728: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename services
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 6 lines ...
    STEP: creating replication controller affinity-clusterip in namespace services-6869
    I0905 15:01:20.830904      19 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-6869, replica count: 3
    I0905 15:01:23.881838      19 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
    Sep  5 15:01:23.890: INFO: Creating new exec pod
    Sep  5 15:01:26.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:01:29.116: INFO: rc: 1
    Sep  5 15:01:29.116: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
... skipping 128 lines ...
    Sep  5 15:01:54.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:01:56.300: INFO: rc: 1
    Sep  5 15:01:56.301: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:01:57.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:01:59.314: INFO: rc: 1
    Sep  5 15:01:59.314: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:02:00.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:02:02.327: INFO: rc: 1
    Sep  5 15:02:02.327: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:02:03.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:02:05.313: INFO: rc: 1
    Sep  5 15:02:05.313: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:02:06.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:02:08.309: INFO: rc: 1
    Sep  5 15:02:08.310: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + + echo hostNamenc
     -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:02:09.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:02:11.319: INFO: rc: 1
    Sep  5 15:02:11.319: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + + nc -v -t -w 2 affinity-clusterip 80
    echo hostName
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:02:12.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:02:14.321: INFO: rc: 1
    Sep  5 15:02:14.321: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:02:15.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:02:17.339: INFO: rc: 1
    Sep  5 15:02:17.339: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:02:18.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:02:20.333: INFO: rc: 1
    Sep  5 15:02:20.333: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:02:21.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:02:23.379: INFO: rc: 1
    Sep  5 15:02:23.380: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:02:24.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:02:26.324: INFO: rc: 1
    Sep  5 15:02:26.324: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:02:27.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:02:29.336: INFO: rc: 1
    Sep  5 15:02:29.336: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:02:30.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:02:32.316: INFO: rc: 1
    Sep  5 15:02:32.316: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:02:33.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:02:35.319: INFO: rc: 1
    Sep  5 15:02:35.319: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:02:36.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:02:38.324: INFO: rc: 1
    Sep  5 15:02:38.324: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:02:39.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:02:41.339: INFO: rc: 1
    Sep  5 15:02:41.339: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:02:42.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:02:44.334: INFO: rc: 1
    Sep  5 15:02:44.335: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:02:45.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:02:47.337: INFO: rc: 1
    Sep  5 15:02:47.337: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:02:48.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:02:50.323: INFO: rc: 1
    Sep  5 15:02:50.323: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:02:51.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:02:53.317: INFO: rc: 1
    Sep  5 15:02:53.317: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:02:54.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:02:56.288: INFO: rc: 1
    Sep  5 15:02:56.288: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:02:57.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:02:59.329: INFO: rc: 1
    Sep  5 15:02:59.329: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:03:00.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:03:02.304: INFO: rc: 1
    Sep  5 15:03:02.304: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:03:03.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:03:05.339: INFO: rc: 1
    Sep  5 15:03:05.339: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:03:06.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:03:08.302: INFO: rc: 1
    Sep  5 15:03:08.302: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:03:09.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:03:11.330: INFO: rc: 1
    Sep  5 15:03:11.330: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:03:12.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:03:14.295: INFO: rc: 1
    Sep  5 15:03:14.295: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:03:15.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:03:17.314: INFO: rc: 1
    Sep  5 15:03:17.314: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:03:18.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:03:20.304: INFO: rc: 1
    Sep  5 15:03:20.305: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:03:21.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:03:23.318: INFO: rc: 1
    Sep  5 15:03:23.318: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:03:24.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:03:26.310: INFO: rc: 1
    Sep  5 15:03:26.310: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + + ncecho -v -t hostName -w
     2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:03:27.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:03:29.343: INFO: rc: 1
    Sep  5 15:03:29.343: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + + nc -v -t -w 2 affinity-clusterip 80
    echo hostName
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:03:29.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:03:31.541: INFO: rc: 1
    Sep  5 15:03:31.541: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6869 exec execpod-affinitywdc7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:03:31.542: FAIL: Unexpected error:

        <*errors.errorString | 0xc004290920>: {
            s: "service is not reachable within 2m0s timeout on endpoint affinity-clusterip:80 over TCP protocol",
        }
        service is not reachable within 2m0s timeout on endpoint affinity-clusterip:80 over TCP protocol
    occurred
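
    The "Retrying..." lines above come from a poll loop in the e2e framework that re-runs the same `kubectl exec ... nc` probe until a 2m0s deadline expires. The following is a minimal standalone sketch of that retry-until-timeout pattern, not the framework's actual code: the probe is a stub that fails twice and then succeeds, so the script is self-contained, and the 120s deadline mirrors the 2m0s timeout reported in the log.

    ```shell
    #!/bin/sh
    # Sketch of a retry-until-timeout loop like the one producing the
    # "Retrying..." output above. The real probe is
    #   kubectl exec ... -- sh -c 'echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    # here it is a stub that returns rc=1 twice, then rc=0.
    attempts=0
    probe() {
        attempts=$((attempts + 1))
        [ "$attempts" -ge 3 ]   # stub: fail on the first two calls
    }
    deadline=$(( $(date +%s) + 120 ))   # 2m0s, as in the log
    until probe; do
        if [ "$(date +%s)" -ge "$deadline" ]; then
            echo "service is not reachable within 2m0s timeout" >&2
            exit 1
        fi
        echo "Retrying..."
        sleep 1
    done
    echo "reachable after $attempts attempts"
    ```

    When the probe never succeeds before the deadline, the loop surfaces exactly the kind of "service is not reachable within 2m0s timeout" error recorded in the failure below.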
    
... skipping 27 lines ...
    • Failure [132.836 seconds]
    [sig-network] Services
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
      should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep  5 15:03:31.542: Unexpected error:

          <*errors.errorString | 0xc004290920>: {
              s: "service is not reachable within 2m0s timeout on endpoint affinity-clusterip:80 over TCP protocol",
          }
          service is not reachable within 2m0s timeout on endpoint affinity-clusterip:80 over TCP protocol
      occurred
    
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3278
    ------------------------------
    {"msg":"FAILED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":81,"skipped":1676,"failed":10,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:03:35.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-9049" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":82,"skipped":1704,"failed":10,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] CronJob
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 24 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:03:35.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "cronjob-4294" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":-1,"completed":83,"skipped":1726,"failed":10,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSS
    ------------------------------
    {"msg":"FAILED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":683,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}

    [BeforeEach] [sig-network] Networking
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 14:50:06.840: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename pod-network-test
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 32 lines ...
    Sep  5 14:50:33.524: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.6.40:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7355 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 14:50:33.524: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 14:50:33.693: INFO: Found all 1 expected endpoints: [netserver-2]
    Sep  5 14:50:33.693: INFO: Going to poll 192.168.2.64 on port 8083 at least 0 times, with a maximum of 46 tries before failing
    Sep  5 14:50:33.700: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7355 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 14:50:33.700: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 14:50:48.861: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep  5 14:50:48.861: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 14:50:50.869: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7355 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 14:50:50.869: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 14:51:06.071: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep  5 14:51:06.071: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 14:51:08.079: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7355 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 14:51:08.079: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 14:51:23.268: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep  5 14:51:23.268: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 14:51:25.276: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7355 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 14:51:25.277: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 14:51:40.431: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep  5 14:51:40.432: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 14:51:42.441: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7355 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 14:51:42.442: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 14:51:57.588: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep  5 14:51:57.589: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 14:51:59.599: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7355 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 14:51:59.599: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 14:52:14.737: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep  5 14:52:14.737: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 14:52:16.748: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7355 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 14:52:16.748: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 14:52:31.914: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep  5 14:52:31.914: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 14:52:33.922: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7355 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 14:52:33.923: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 14:52:49.091: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep  5 14:52:49.092: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 14:52:51.099: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7355 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 14:52:51.099: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 14:53:06.245: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep  5 14:53:06.245: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 14:53:08.257: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7355 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 14:53:08.258: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 14:53:23.463: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep  5 14:53:23.463: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 14:53:25.472: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7355 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 14:53:25.472: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 14:53:40.637: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep  5 14:53:40.637: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 14:53:42.645: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7355 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 14:53:42.645: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 14:53:57.821: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep  5 14:53:57.821: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 14:53:59.830: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7355 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 14:53:59.830: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 14:54:14.993: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep  5 14:54:14.993: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 14:54:17.002: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7355 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 14:54:17.002: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 14:54:32.218: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep  5 14:54:32.218: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 14:54:34.226: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7355 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 14:54:34.226: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 14:54:49.378: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep  5 14:54:49.378: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 14:54:51.394: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7355 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 14:54:51.394: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 14:55:06.689: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep  5 14:55:06.689: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 14:55:08.697: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7355 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 14:55:08.697: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 14:55:23.894: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep  5 14:55:23.894: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 14:55:25.903: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7355 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 14:55:25.903: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 14:55:41.100: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep  5 14:55:41.100: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 14:55:43.110: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7355 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 14:55:43.110: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 14:55:58.309: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep  5 14:55:58.309: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 14:56:00.319: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7355 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 14:56:00.320: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 14:56:15.552: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep  5 14:56:15.552: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 14:56:17.563: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7355 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 14:56:17.563: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 14:56:32.757: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep  5 14:56:32.757: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 14:56:34.762: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7355 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 14:56:34.762: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 14:56:49.868: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep  5 14:56:49.868: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 14:56:51.876: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7355 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 14:56:51.877: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 14:57:06.993: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep  5 14:57:06.993: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 14:57:08.998: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7355 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 14:57:08.998: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 14:57:24.110: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep  5 14:57:24.110: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 14:57:26.118: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7355 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 14:57:26.118: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 14:57:41.232: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep  5 14:57:41.232: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 14:57:43.238: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7355 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 14:57:43.238: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 14:57:58.354: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep  5 14:57:58.354: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 14:58:00.359: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7355 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 14:58:00.359: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 14:58:15.471: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep  5 14:58:15.471: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 14:58:17.476: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7355 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 14:58:17.476: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 14:58:32.575: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep  5 14:58:32.575: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 14:58:34.581: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7355 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 14:58:34.581: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 14:58:49.680: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep  5 14:58:49.680: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 14:58:51.686: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7355 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 14:58:51.686: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 14:59:06.784: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep  5 14:59:06.784: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 14:59:08.790: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7355 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 14:59:08.790: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 14:59:23.894: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep  5 14:59:23.894: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 14:59:25.900: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7355 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 14:59:25.900: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 14:59:41.023: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep  5 14:59:41.023: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 14:59:43.030: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7355 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 14:59:43.030: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 14:59:58.136: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep  5 14:59:58.136: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 15:00:00.143: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7355 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 15:00:00.143: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 15:00:15.257: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep  5 15:00:15.257: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 15:00:17.263: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7355 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 15:00:17.263: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 15:00:32.364: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep  5 15:00:32.364: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 15:00:34.370: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7355 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 15:00:34.370: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 15:00:49.497: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep  5 15:00:49.497: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 15:00:51.503: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7355 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 15:00:51.503: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 15:01:06.627: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep  5 15:01:06.627: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 15:01:08.633: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7355 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 15:01:08.633: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 15:01:23.729: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep  5 15:01:23.730: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 15:01:25.735: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7355 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 15:01:25.735: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 15:01:40.867: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep  5 15:01:40.867: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 15:01:42.872: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7355 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 15:01:42.873: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 15:01:57.980: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
    Sep  5 15:01:57.980: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 15:01:59.987: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7355 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 15:01:59.987: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 15:02:15.093: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
    Sep  5 15:02:15.093: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 15:02:17.099: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7355 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 15:02:17.099: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 15:02:32.218: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
    Sep  5 15:02:32.218: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 15:02:34.225: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7355 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 15:02:34.225: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 15:02:49.316: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
    Sep  5 15:02:49.316: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 15:02:51.322: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7355 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 15:02:51.322: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 15:03:06.416: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
    Sep  5 15:03:06.416: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 15:03:08.424: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7355 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 15:03:08.424: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 15:03:23.530: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
    Sep  5 15:03:23.531: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 15:03:25.536: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7355 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 15:03:25.536: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 15:03:40.640: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
    Sep  5 15:03:40.640: INFO: Waiting for [netserver-3] endpoints (expected=[netserver-3], actual=[])
    Sep  5 15:03:42.640: INFO: 
    Output of kubectl describe pod pod-network-test-7355/netserver-0:
    
    Sep  5 15:03:42.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=pod-network-test-7355 describe pod netserver-0 --namespace=pod-network-test-7355'
    Sep  5 15:03:42.757: INFO: stderr: ""
... skipping 237 lines ...
      ----    ------     ----  ----               -------
      Normal  Scheduled  13m   default-scheduler  Successfully assigned pod-network-test-7355/netserver-3 to k8s-upgrade-and-conformance-s5wz98-worker-fcjlzd
      Normal  Pulled     13m   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.39" already present on machine
      Normal  Created    13m   kubelet            Created container webserver
      Normal  Started    13m   kubelet            Started container webserver
    
    Sep  5 15:03:43.152: FAIL: Error dialing HTTP node to pod failed to find expected endpoints, 
    tries 46
    Command curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName
    retrieved map[]
    expected map[netserver-3:{}]
    
    Full Stack Trace
... skipping 16 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
      Granular Checks: Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
        should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] [It]
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
        Sep  5 15:03:43.152: Error dialing HTTP node to pod failed to find expected endpoints, 
        tries 46
        Command curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8083/hostName
        retrieved map[]
        expected map[netserver-3:{}]
    
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:03:49.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-2311" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":-1,"completed":84,"skipped":1729,"failed":10,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    S
    ------------------------------
    [BeforeEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:03:52.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svcaccounts-9774" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":-1,"completed":85,"skipped":1730,"failed":10,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 15:03:52.645: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-5fc6b23c-c429-4af2-a255-4b63c633d538
    STEP: Creating a pod to test consume secrets
    Sep  5 15:03:52.697: INFO: Waiting up to 5m0s for pod "pod-secrets-f314a9f7-3cf8-4399-bdae-a43b284e3bc4" in namespace "secrets-9822" to be "Succeeded or Failed"
    Sep  5 15:03:52.702: INFO: Pod "pod-secrets-f314a9f7-3cf8-4399-bdae-a43b284e3bc4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.760005ms
    Sep  5 15:03:54.707: INFO: Pod "pod-secrets-f314a9f7-3cf8-4399-bdae-a43b284e3bc4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009451953s
    Sep  5 15:03:56.713: INFO: Pod "pod-secrets-f314a9f7-3cf8-4399-bdae-a43b284e3bc4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015143813s
    STEP: Saw pod success
    Sep  5 15:03:56.713: INFO: Pod "pod-secrets-f314a9f7-3cf8-4399-bdae-a43b284e3bc4" satisfied condition "Succeeded or Failed"
    Sep  5 15:03:56.718: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-md-0-62h4s-76fdb55b5f-tzhp2 pod pod-secrets-f314a9f7-3cf8-4399-bdae-a43b284e3bc4 container secret-volume-test: <nil>
    STEP: delete the pod
    Sep  5 15:03:56.751: INFO: Waiting for pod pod-secrets-f314a9f7-3cf8-4399-bdae-a43b284e3bc4 to disappear
    Sep  5 15:03:56.755: INFO: Pod pod-secrets-f314a9f7-3cf8-4399-bdae-a43b284e3bc4 no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:03:56.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-9822" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":86,"skipped":1741,"failed":10,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    SSS
    ------------------------------
    {"msg":"FAILED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":683,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}

    [BeforeEach] [sig-network] Networking
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 15:03:43.171: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename pod-network-test
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 40 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:04:07.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pod-network-test-1411" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":683,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}

    SSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 15:04:07.804: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0666 on tmpfs
    Sep  5 15:04:07.876: INFO: Waiting up to 5m0s for pod "pod-ac9567e9-84e2-416a-8ef4-aa08c4b49086" in namespace "emptydir-893" to be "Succeeded or Failed"
    Sep  5 15:04:07.883: INFO: Pod "pod-ac9567e9-84e2-416a-8ef4-aa08c4b49086": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045404ms
    Sep  5 15:04:09.889: INFO: Pod "pod-ac9567e9-84e2-416a-8ef4-aa08c4b49086": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011797398s
    Sep  5 15:04:11.895: INFO: Pod "pod-ac9567e9-84e2-416a-8ef4-aa08c4b49086": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017920398s
    STEP: Saw pod success
    Sep  5 15:04:11.895: INFO: Pod "pod-ac9567e9-84e2-416a-8ef4-aa08c4b49086" satisfied condition "Succeeded or Failed"
    Sep  5 15:04:11.899: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-md-0-62h4s-76fdb55b5f-tzhp2 pod pod-ac9567e9-84e2-416a-8ef4-aa08c4b49086 container test-container: <nil>
    STEP: delete the pod
    Sep  5 15:04:11.921: INFO: Waiting for pod pod-ac9567e9-84e2-416a-8ef4-aa08c4b49086 to disappear
    Sep  5 15:04:11.925: INFO: Pod pod-ac9567e9-84e2-416a-8ef4-aa08c4b49086 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:04:11.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-893" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":687,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}

    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 59 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:04:15.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pod-network-test-92" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":87,"skipped":1744,"failed":10,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    SSSSSSSS
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":-1,"completed":31,"skipped":698,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}

    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 15:04:12.052: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:05:24.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-5651" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":698,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}

    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicaSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:05:29.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replicaset-1230" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":-1,"completed":33,"skipped":709,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}

    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 15:05:30.096: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename security-context-test
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
    [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep  5 15:05:30.190: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-da54193e-3ac2-4b42-840e-65344e5ef220" in namespace "security-context-test-1479" to be "Succeeded or Failed"
    Sep  5 15:05:30.196: INFO: Pod "busybox-privileged-false-da54193e-3ac2-4b42-840e-65344e5ef220": Phase="Pending", Reason="", readiness=false. Elapsed: 5.762605ms
    Sep  5 15:05:32.203: INFO: Pod "busybox-privileged-false-da54193e-3ac2-4b42-840e-65344e5ef220": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012816604s
    Sep  5 15:05:34.210: INFO: Pod "busybox-privileged-false-da54193e-3ac2-4b42-840e-65344e5ef220": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01971117s
    Sep  5 15:05:34.210: INFO: Pod "busybox-privileged-false-da54193e-3ac2-4b42-840e-65344e5ef220" satisfied condition "Succeeded or Failed"
    Sep  5 15:05:34.226: INFO: Got logs for pod "busybox-privileged-false-da54193e-3ac2-4b42-840e-65344e5ef220": "ip: RTNETLINK answers: Operation not permitted\n"
    [AfterEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:05:34.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "security-context-test-1479" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":775,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}

    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Kubelet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:05:36.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubelet-test-7055" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":782,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}

    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:05:40.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-755" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":36,"skipped":792,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}

    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should provide container's cpu limit [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  5 15:05:40.748: INFO: Waiting up to 5m0s for pod "downwardapi-volume-748d6f1f-8506-49d8-b67d-6ddce6fd0248" in namespace "downward-api-320" to be "Succeeded or Failed"
    Sep  5 15:05:40.756: INFO: Pod "downwardapi-volume-748d6f1f-8506-49d8-b67d-6ddce6fd0248": Phase="Pending", Reason="", readiness=false. Elapsed: 6.83287ms
    Sep  5 15:05:42.763: INFO: Pod "downwardapi-volume-748d6f1f-8506-49d8-b67d-6ddce6fd0248": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014413955s
    Sep  5 15:05:44.770: INFO: Pod "downwardapi-volume-748d6f1f-8506-49d8-b67d-6ddce6fd0248": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021439726s
    STEP: Saw pod success
    Sep  5 15:05:44.770: INFO: Pod "downwardapi-volume-748d6f1f-8506-49d8-b67d-6ddce6fd0248" satisfied condition "Succeeded or Failed"
    Sep  5 15:05:44.775: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-worker-9oo03u pod downwardapi-volume-748d6f1f-8506-49d8-b67d-6ddce6fd0248 container client-container: <nil>
    STEP: delete the pod
    Sep  5 15:05:44.811: INFO: Waiting for pod downwardapi-volume-748d6f1f-8506-49d8-b67d-6ddce6fd0248 to disappear
    Sep  5 15:05:44.815: INFO: Pod downwardapi-volume-748d6f1f-8506-49d8-b67d-6ddce6fd0248 no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:05:44.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-320" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":37,"skipped":846,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 23 lines ...
    STEP: Destroying namespace "webhook-7415-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":38,"skipped":868,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-node] PodTemplates
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:05:48.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "podtemplate-1075" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":39,"skipped":871,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-network] EndpointSlice
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:06:19.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "endpointslice-3737" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":-1,"completed":40,"skipped":875,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Watchers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 18 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:06:19.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "watch-9308" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":-1,"completed":41,"skipped":922,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:06:26.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-2221" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":-1,"completed":42,"skipped":948,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-node] Container Lifecycle Hook
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 22 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:06:34.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-lifecycle-hook-1726" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":43,"skipped":952,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 15:06:34.742: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable via the environment [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap configmap-2842/configmap-test-5ef1ba82-9287-49dc-83a3-2b50bae465d2
    STEP: Creating a pod to test consume configMaps
    Sep  5 15:06:34.796: INFO: Waiting up to 5m0s for pod "pod-configmaps-a01608d8-1bf6-46b8-817c-5b6a7419dd23" in namespace "configmap-2842" to be "Succeeded or Failed"
    Sep  5 15:06:34.805: INFO: Pod "pod-configmaps-a01608d8-1bf6-46b8-817c-5b6a7419dd23": Phase="Pending", Reason="", readiness=false. Elapsed: 8.535086ms
    Sep  5 15:06:36.810: INFO: Pod "pod-configmaps-a01608d8-1bf6-46b8-817c-5b6a7419dd23": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013721166s
    Sep  5 15:06:38.815: INFO: Pod "pod-configmaps-a01608d8-1bf6-46b8-817c-5b6a7419dd23": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01891254s
    STEP: Saw pod success
    Sep  5 15:06:38.815: INFO: Pod "pod-configmaps-a01608d8-1bf6-46b8-817c-5b6a7419dd23" satisfied condition "Succeeded or Failed"
    Sep  5 15:06:38.820: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-worker-fcjlzd pod pod-configmaps-a01608d8-1bf6-46b8-817c-5b6a7419dd23 container env-test: <nil>
    STEP: delete the pod
    Sep  5 15:06:38.848: INFO: Waiting for pod pod-configmaps-a01608d8-1bf6-46b8-817c-5b6a7419dd23 to disappear
    Sep  5 15:06:38.852: INFO: Pod pod-configmaps-a01608d8-1bf6-46b8-817c-5b6a7419dd23 no longer exists
    [AfterEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:06:38.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-2842" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":44,"skipped":955,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:06:43.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-9272" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":45,"skipped":969,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 23 lines ...
    • [SLOW TEST:148.586 seconds]
    [sig-node] Probing container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
      should have monotonically increasing restart count [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":-1,"completed":88,"skipped":1752,"failed":10,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Networking
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 287 lines ...
      ----    ------     ----  ----               -------
      Normal  Scheduled  32s   default-scheduler  Successfully assigned pod-network-test-254/netserver-3 to k8s-upgrade-and-conformance-s5wz98-worker-fcjlzd
      Normal  Pulled     32s   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.39" already present on machine
      Normal  Created    32s   kubelet            Created container webserver
      Normal  Started    31s   kubelet            Started container webserver
    
    Sep  5 15:01:16.371: INFO: encountered error during dial (did not find expected responses... 
    Tries 1
    Command curl -g -q -s 'http://192.168.1.54:9080/dial?request=hostname&protocol=http&host=192.168.2.90&port=8083&tries=1'
    retrieved map[]
    expected map[netserver-3:{}])
    Sep  5 15:01:16.371: INFO: ...failed...will try again in next pass
    Sep  5 15:01:16.371: INFO: Going to retry 1 out of 4 pods....
    Sep  5 15:01:16.371: INFO: Doublechecking 1 pods in host 172.18.0.6 which weren't seen the first time.
    Sep  5 15:01:16.371: INFO: Now attempting to probe pod [[[ 192.168.2.90 ]]]
    Sep  5 15:01:16.374: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.1.54:9080/dial?request=hostname&protocol=http&host=192.168.2.90&port=8083&tries=1'] Namespace:pod-network-test-254 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  5 15:01:16.375: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 15:01:21.479: INFO: Waiting for responses: map[netserver-3:{}]
... skipping 377 lines ...
      ----    ------     ----   ----               -------
      Normal  Scheduled  6m     default-scheduler  Successfully assigned pod-network-test-254/netserver-3 to k8s-upgrade-and-conformance-s5wz98-worker-fcjlzd
      Normal  Pulled     6m     kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.39" already present on machine
      Normal  Created    6m     kubelet            Created container webserver
      Normal  Started    5m59s  kubelet            Started container webserver
    
    Sep  5 15:06:44.241: INFO: encountered error during dial (did not find expected responses... 
    Tries 46
    Command curl -g -q -s 'http://192.168.1.54:9080/dial?request=hostname&protocol=http&host=192.168.2.90&port=8083&tries=1'
    retrieved map[]
    expected map[netserver-3:{}])
    Sep  5 15:06:44.241: INFO: ... Done probing pod [[[ 192.168.2.90 ]]]
    Sep  5 15:06:44.241: INFO: succeeded at polling 3 out of 4 connections
    Sep  5 15:06:44.241: INFO: pod polling failure summary:
    Sep  5 15:06:44.241: INFO: Collected error: did not find expected responses... 
    Tries 46
    Command curl -g -q -s 'http://192.168.1.54:9080/dial?request=hostname&protocol=http&host=192.168.2.90&port=8083&tries=1'
    retrieved map[]
    expected map[netserver-3:{}]
    Sep  5 15:06:44.241: FAIL: failed,  1 out of 4 connections failed
    
    Full Stack Trace
    k8s.io/kubernetes/test/e2e/common/network.glob..func1.1.2()
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:82 +0x69
    k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0002e2780)
    	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
... skipping 14 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
      Granular Checks: Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
        should function for intra-pod communication: http [NodeConformance] [Conformance] [It]
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
        Sep  5 15:06:44.241: failed,  1 out of 4 connections failed
    
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:82
    ------------------------------
    [BeforeEach] [sig-node] Container Runtime
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:06:48.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-runtime-8446" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":89,"skipped":1772,"failed":10,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicaSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:06:53.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replicaset-296" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":-1,"completed":46,"skipped":976,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] DisruptionController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:06:57.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "disruption-1757" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]","total":-1,"completed":47,"skipped":1000,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 15:06:57.447: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name projected-configmap-test-volume-map-744703fa-b4bd-471f-a704-992ecac252b7
    STEP: Creating a pod to test consume configMaps
    Sep  5 15:06:57.504: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bf53224b-7a6b-468d-9d89-235bf8550324" in namespace "projected-2037" to be "Succeeded or Failed"
    Sep  5 15:06:57.509: INFO: Pod "pod-projected-configmaps-bf53224b-7a6b-468d-9d89-235bf8550324": Phase="Pending", Reason="", readiness=false. Elapsed: 4.94848ms
    Sep  5 15:06:59.515: INFO: Pod "pod-projected-configmaps-bf53224b-7a6b-468d-9d89-235bf8550324": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011053128s
    Sep  5 15:07:01.524: INFO: Pod "pod-projected-configmaps-bf53224b-7a6b-468d-9d89-235bf8550324": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01995311s
    STEP: Saw pod success
    Sep  5 15:07:01.524: INFO: Pod "pod-projected-configmaps-bf53224b-7a6b-468d-9d89-235bf8550324" satisfied condition "Succeeded or Failed"
    Sep  5 15:07:01.528: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-md-0-62h4s-76fdb55b5f-tzhp2 pod pod-projected-configmaps-bf53224b-7a6b-468d-9d89-235bf8550324 container agnhost-container: <nil>
    STEP: delete the pod
    Sep  5 15:07:01.551: INFO: Waiting for pod pod-projected-configmaps-bf53224b-7a6b-468d-9d89-235bf8550324 to disappear
    Sep  5 15:07:01.555: INFO: Pod pod-projected-configmaps-bf53224b-7a6b-468d-9d89-235bf8550324 no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:07:01.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-2037" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":48,"skipped":1049,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}

    
    SS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 24 lines ...
    STEP: Destroying namespace "webhook-4698-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":49,"skipped":1051,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 48 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:07:09.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-882" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":-1,"completed":50,"skipped":1056,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}

    
    SS
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:07:11.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-5038" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":-1,"completed":51,"skipped":1058,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:07:16.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-1921" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":-1,"completed":52,"skipped":1120,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 15:07:16.915: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename security-context-test
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
    [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep  5 15:07:16.989: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-e84a0b14-bec5-46c3-b1d5-da6b5ec944e1" in namespace "security-context-test-3871" to be "Succeeded or Failed"
    Sep  5 15:07:16.994: INFO: Pod "busybox-readonly-false-e84a0b14-bec5-46c3-b1d5-da6b5ec944e1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.479899ms
    Sep  5 15:07:18.999: INFO: Pod "busybox-readonly-false-e84a0b14-bec5-46c3-b1d5-da6b5ec944e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00969126s
    Sep  5 15:07:21.005: INFO: Pod "busybox-readonly-false-e84a0b14-bec5-46c3-b1d5-da6b5ec944e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016140115s
    Sep  5 15:07:21.006: INFO: Pod "busybox-readonly-false-e84a0b14-bec5-46c3-b1d5-da6b5ec944e1" satisfied condition "Succeeded or Failed"
    [AfterEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:07:21.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "security-context-test-3871" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":-1,"completed":53,"skipped":1140,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
    STEP: Destroying namespace "services-8084" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":54,"skipped":1165,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}
    
    SSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 15:07:21.207: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-test-volume-map-b84a3ed7-0074-4fdc-9561-be94e480b97d
    STEP: Creating a pod to test consume configMaps
    Sep  5 15:07:21.271: INFO: Waiting up to 5m0s for pod "pod-configmaps-441b110f-57b4-4fa4-a5a6-f8503fee8c56" in namespace "configmap-7189" to be "Succeeded or Failed"
    Sep  5 15:07:21.277: INFO: Pod "pod-configmaps-441b110f-57b4-4fa4-a5a6-f8503fee8c56": Phase="Pending", Reason="", readiness=false. Elapsed: 5.939871ms
    Sep  5 15:07:23.282: INFO: Pod "pod-configmaps-441b110f-57b4-4fa4-a5a6-f8503fee8c56": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010708506s
    Sep  5 15:07:25.287: INFO: Pod "pod-configmaps-441b110f-57b4-4fa4-a5a6-f8503fee8c56": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015787437s
    STEP: Saw pod success
    Sep  5 15:07:25.287: INFO: Pod "pod-configmaps-441b110f-57b4-4fa4-a5a6-f8503fee8c56" satisfied condition "Succeeded or Failed"
    Sep  5 15:07:25.291: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-worker-fcjlzd pod pod-configmaps-441b110f-57b4-4fa4-a5a6-f8503fee8c56 container agnhost-container: <nil>
    STEP: delete the pod
    Sep  5 15:07:25.309: INFO: Waiting for pod pod-configmaps-441b110f-57b4-4fa4-a5a6-f8503fee8c56 to disappear
    Sep  5 15:07:25.314: INFO: Pod pod-configmaps-441b110f-57b4-4fa4-a5a6-f8503fee8c56 no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:07:25.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-7189" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":55,"skipped":1180,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}
    
    SSSSSSSS
    ------------------------------
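The `Waiting up to 5m0s for pod ... to be "Succeeded or Failed"` / `Elapsed: ...` sequences above come from a poll-until-terminal-phase loop. A minimal sketch of that shape, where `get_phase` is a hypothetical stand-in for the real API read rather than part of the e2e framework:

```python
import time

def wait_for_phase(get_phase, want=("Succeeded", "Failed"),
                   timeout=300.0, poll=2.0):
    # Poll get_phase() until it returns a terminal phase or the timeout
    # expires; returns (phase, elapsed seconds), mirroring the
    # "Phase=... Elapsed: ..." lines in the log above.
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        if phase in want:
            return phase, elapsed
        if elapsed > timeout:
            raise TimeoutError(f"still {phase} after {elapsed:.1f}s")
        time.sleep(poll)

# Simulated phases mirroring the ConfigMap pod above:
phases = iter(["Pending", "Pending", "Succeeded"])
print(wait_for_phase(lambda: next(phases), poll=0.01)[0])  # → Succeeded
```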
    [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:07:25.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "custom-resource-definition-8610" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":56,"skipped":1188,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}
    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should provide podname only [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  5 15:07:25.492: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ff869e6b-f83f-4a30-9b11-b7ef5e9f9a55" in namespace "downward-api-4414" to be "Succeeded or Failed"
    Sep  5 15:07:25.496: INFO: Pod "downwardapi-volume-ff869e6b-f83f-4a30-9b11-b7ef5e9f9a55": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069606ms
    Sep  5 15:07:27.502: INFO: Pod "downwardapi-volume-ff869e6b-f83f-4a30-9b11-b7ef5e9f9a55": Phase="Running", Reason="", readiness=false. Elapsed: 2.010017079s
    Sep  5 15:07:29.507: INFO: Pod "downwardapi-volume-ff869e6b-f83f-4a30-9b11-b7ef5e9f9a55": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015391194s
    STEP: Saw pod success
    Sep  5 15:07:29.507: INFO: Pod "downwardapi-volume-ff869e6b-f83f-4a30-9b11-b7ef5e9f9a55" satisfied condition "Succeeded or Failed"
    Sep  5 15:07:29.512: INFO: Trying to get logs from node k8s-upgrade-and-conformance-s5wz98-worker-fcjlzd pod downwardapi-volume-ff869e6b-f83f-4a30-9b11-b7ef5e9f9a55 container client-container: <nil>
    STEP: delete the pod
    Sep  5 15:07:29.533: INFO: Waiting for pod downwardapi-volume-ff869e6b-f83f-4a30-9b11-b7ef5e9f9a55 to disappear
    Sep  5 15:07:29.537: INFO: Pod downwardapi-volume-ff869e6b-f83f-4a30-9b11-b7ef5e9f9a55 no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:07:29.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-4414" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":57,"skipped":1200,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] CronJob
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:08:00.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "cronjob-6499" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","total":-1,"completed":90,"skipped":1775,"failed":10,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}
    
    S
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 17 lines ...
    STEP: creating replication controller affinity-clusterip-timeout in namespace services-2318
    I0905 15:07:31.915579      21 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-2318, replica count: 3
    I0905 15:07:34.966566      21 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
    Sep  5 15:07:34.976: INFO: Creating new exec pod
    Sep  5 15:07:37.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2318 exec execpod-affinityvsvr8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80'
    Sep  5 15:07:40.220: INFO: rc: 1
    Sep  5 15:07:40.220: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2318 exec execpod-affinityvsvr8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80:
    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip-timeout 80
    nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:
    exit status 1
    Retrying...
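The loop recorded above (run the probe, log `rc: 1`, wait, retry) has a simple generic shape. A minimal local stand-in follows; `probe` and the sample shell commands are mine, not the framework's, and the real probe is the kubectl-exec'd netcat shown in the log.

```python
import subprocess
import time

def probe(shell_cmd, retries=3, delay=0.01):
    # Run shell_cmd under /bin/sh until it exits 0, retrying with a fixed
    # delay -- the same shape as the framework's "rc: 1 ... Retrying..."
    # loop above. Returns the attempt number that succeeded, else None.
    for attempt in range(1, retries + 1):
        rc = subprocess.run(["/bin/sh", "-c", shell_cmd],
                            capture_output=True).returncode
        if rc == 0:
            return attempt
        time.sleep(delay)
    return None

print(probe("true"))    # → 1
print(probe("exit 1"))  # → None
```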
... skipping 432 lines: 27 further near-identical retries of the same probe ('echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80' via kubectl exec), roughly every 3s from 15:07:41 to 15:08:59, each ending with rc: 1 and "nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress" ...
    Sep  5 15:09:02.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2318 exec execpod-affinityvsvr8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80'
    Sep  5 15:09:04.415: INFO: rc: 1
    Sep  5 15:09:04.416: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2318 exec execpod-affinityvsvr8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip-timeout 80
    nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:09:05.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2318 exec execpod-affinityvsvr8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80'
    Sep  5 15:09:07.428: INFO: rc: 1
    Sep  5 15:09:07.428: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2318 exec execpod-affinityvsvr8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip-timeout 80
    nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:09:08.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2318 exec execpod-affinityvsvr8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80'
    Sep  5 15:09:10.437: INFO: rc: 1
    Sep  5 15:09:10.437: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2318 exec execpod-affinityvsvr8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip-timeout 80
    nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:09:11.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2318 exec execpod-affinityvsvr8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80'
    Sep  5 15:09:13.400: INFO: rc: 1
    Sep  5 15:09:13.400: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2318 exec execpod-affinityvsvr8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip-timeout 80
    nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:09:14.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2318 exec execpod-affinityvsvr8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80'
    Sep  5 15:09:16.430: INFO: rc: 1
    Sep  5 15:09:16.430: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2318 exec execpod-affinityvsvr8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip-timeout 80
    nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:09:17.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2318 exec execpod-affinityvsvr8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80'
    Sep  5 15:09:19.427: INFO: rc: 1
    Sep  5 15:09:19.427: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2318 exec execpod-affinityvsvr8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip-timeout 80
    nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:09:20.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2318 exec execpod-affinityvsvr8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80'
    Sep  5 15:09:22.404: INFO: rc: 1
    Sep  5 15:09:22.404: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2318 exec execpod-affinityvsvr8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip-timeout 80
    nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:09:23.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2318 exec execpod-affinityvsvr8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80'
    Sep  5 15:09:25.394: INFO: rc: 1
    Sep  5 15:09:25.394: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2318 exec execpod-affinityvsvr8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip-timeout 80
    nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:09:26.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2318 exec execpod-affinityvsvr8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80'
    Sep  5 15:09:28.412: INFO: rc: 1
    Sep  5 15:09:28.412: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2318 exec execpod-affinityvsvr8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip-timeout 80
    nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:09:29.220: INFO: Running '/usr/local/bin/kubectl --kubec