Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2020-10-25 15:41
Elapsed: 2h21m
Revision: master

No Test Failures!


Error lines from build-log.txt

... skipping 609 lines ...
Trying to find master named 'bootstrap-e2e-master'
Looking for address 'bootstrap-e2e-master-ip'
Using master: bootstrap-e2e-master (external IP: 35.247.44.183; internal IP: (not set))
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

.......................Kubernetes cluster created.
Cluster "k8s-gci-gce-ingress1-5_bootstrap-e2e" set.
User "k8s-gci-gce-ingress1-5_bootstrap-e2e" set.
Context "k8s-gci-gce-ingress1-5_bootstrap-e2e" created.
Switched to context "k8s-gci-gce-ingress1-5_bootstrap-e2e".
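The four confirmation lines above are standard kubectl config output. A minimal sketch of the commands that produce them, assuming the server address from this run and placeholder credentials:

$ kubectl config set-cluster k8s-gci-gce-ingress1-5_bootstrap-e2e --server=https://35.247.44.183
$ kubectl config set-credentials k8s-gci-gce-ingress1-5_bootstrap-e2e --token=<redacted>
$ kubectl config set-context k8s-gci-gce-ingress1-5_bootstrap-e2e \
    --cluster=k8s-gci-gce-ingress1-5_bootstrap-e2e --user=k8s-gci-gce-ingress1-5_bootstrap-e2e
$ kubectl config use-context k8s-gci-gce-ingress1-5_bootstrap-e2e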
... skipping 24 lines ...
bootstrap-e2e-minion-group-05w9   Ready                      <none>   11s   v1.20.0-alpha.3.114+5935fcd704fe89
bootstrap-e2e-minion-group-jzdr   Ready                      <none>   12s   v1.20.0-alpha.3.114+5935fcd704fe89
bootstrap-e2e-minion-group-nmms   Ready                      <none>   10s   v1.20.0-alpha.3.114+5935fcd704fe89
Warning: v1 ComponentStatus is deprecated in v1.19+
Validate output:
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-1               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}   
Cluster validation succeeded
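The health table above is the output of the (deprecated) componentstatuses API, which also explains why the v1.19+ deprecation warning is printed twice; it can be reproduced directly:

$ kubectl get componentstatuses   # or: kubectl get cs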
Done, listing cluster services:
... skipping 70 lines ...
Zone: us-west1-b
Dumping logs from master locally to '/logs/artifacts/before'
Trying to find master named 'bootstrap-e2e-master'
Looking for address 'bootstrap-e2e-master-ip'
Using master: bootstrap-e2e-master (external IP: 35.247.44.183; internal IP: (not set))
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Error: No such container: 
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Error: No such container: 
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Error: No such container: 
Changing logfiles to be world-readable for download
Copying 'kube-apiserver.log kube-apiserver-audit.log kube-scheduler.log kube-controller-manager.log etcd.log etcd-events.log glbc.log cluster-autoscaler.log kube-addon-manager.log konnectivity-server.log fluentd.log kubelet.cov startupscript.log' from bootstrap-e2e-master

Specify --start=57057 in the next get-serial-port-output invocation to get only the new output starting from here.
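The serial-console dump is incremental: passing the printed offset back to gcloud resumes reading where the previous call stopped. A sketch, assuming the master instance and zone from this run:

$ gcloud compute instances get-serial-port-output bootstrap-e2e-master \
    --zone=us-west1-b --start=57057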
scp: /var/log/cluster-autoscaler.log*: No such file or directory
scp: /var/log/konnectivity-server.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
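log-dump fetches each logfile over gcloud compute scp; the optional logs above (cluster-autoscaler, konnectivity-server, fluentd, startupscript) were never written on this cluster, so the wildcards match nothing and scp exits 1, which the dump evidently tolerates. Roughly, per file:

$ gcloud compute scp --zone=us-west1-b \
    'bootstrap-e2e-master:/var/log/cluster-autoscaler.log*' /logs/artifacts/before/ \
    || true   # a missing optional log is non-fatal to the dump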
Dumping logs from nodes locally to '/logs/artifacts/before'
Detecting nodes in the cluster
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Error: No such container: 
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Error: No such container: 
Changing logfiles to be world-readable for download
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Changing logfiles to be world-readable for download
Error: No such container: 
Changing logfiles to be world-readable for download
Copying 'kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov startupscript.log' from bootstrap-e2e-minion-group-05w9
Copying 'kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov startupscript.log' from bootstrap-e2e-minion-group-nmms
Copying 'kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov startupscript.log' from bootstrap-e2e-minion-group-jzdr

Specify --start=69144 in the next get-serial-port-output invocation to get only the new output starting from here.
... skipping 5 lines ...
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/kubelet.cov.tmp: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
INSTANCE_GROUPS=bootstrap-e2e-minion-group
NODE_NAMES=bootstrap-e2e-minion-group-05w9 bootstrap-e2e-minion-group-jzdr bootstrap-e2e-minion-group-nmms
Failures for bootstrap-e2e-minion-group (if any):
2020/10/25 16:11:47 process.go:155: Step './cluster/log-dump/log-dump.sh /logs/artifacts/before' finished in 2m11.340198978s
2020/10/25 16:11:47 process.go:153: Running: ./hack/e2e-internal/e2e-status.sh
Project: k8s-gci-gce-ingress1-5
... skipping 14 lines ...
Using master: bootstrap-e2e-master (external IP: 35.247.44.183; internal IP: (not set))
Oct 25 16:11:51.023: INFO: Fetching cloud provider for "gce"
I1025 16:11:51.023198  144261 test_context.go:453] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I1025 16:11:51.023856  144261 gce.go:903] Using DefaultTokenSource &oauth2.reuseTokenSource{new:jwt.jwtSource{ctx:(*context.emptyCtx)(0xc00005a0b0), conf:(*jwt.Config)(0xc001fd6a00)}, mu:sync.Mutex{state:0, sema:0x0}, t:(*oauth2.Token)(nil)}
W1025 16:11:51.238007  144261 gce.go:474] No network name or URL specified.
I1025 16:11:51.238167  144261 e2e.go:129] Starting e2e run "4e8f4773-3937-4db4-ae14-b12d14b9afe6" on Ginkgo node 1
{"msg":"Test Suite starting","total":306,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1603642309 - Will randomize all specs
Will run 306 of 5229 specs
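A Conformance run selects its 306 specs out of 5229 with a Ginkgo focus regex; an invocation along these lines sits behind this suite, with the binary path and provider flags assumed:

$ ./e2e.test --provider=gce --kubeconfig=/workspace/.kube/config \
    --ginkgo.focus='\[Conformance\]' --ginkgo.randomizeAllSpecs --ginkgo.seed=1603642309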

Oct 25 16:11:56.219: INFO: cluster-master-image: cos-85-13310-1041-9
... skipping 23 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Oct 25 16:11:56.970: INFO: Waiting up to 5m0s for pod "downwardapi-volume-771df37a-1ef3-4544-ae20-0e86c1863de3" in namespace "downward-api-9521" to be "Succeeded or Failed"
Oct 25 16:11:57.013: INFO: Pod "downwardapi-volume-771df37a-1ef3-4544-ae20-0e86c1863de3": Phase="Pending", Reason="", readiness=false. Elapsed: 43.120975ms
Oct 25 16:11:59.053: INFO: Pod "downwardapi-volume-771df37a-1ef3-4544-ae20-0e86c1863de3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082824273s
Oct 25 16:12:01.112: INFO: Pod "downwardapi-volume-771df37a-1ef3-4544-ae20-0e86c1863de3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.141602078s
Oct 25 16:12:03.153: INFO: Pod "downwardapi-volume-771df37a-1ef3-4544-ae20-0e86c1863de3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.183446189s
STEP: Saw pod success
Oct 25 16:12:03.153: INFO: Pod "downwardapi-volume-771df37a-1ef3-4544-ae20-0e86c1863de3" satisfied condition "Succeeded or Failed"
Oct 25 16:12:03.195: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod downwardapi-volume-771df37a-1ef3-4544-ae20-0e86c1863de3 container client-container: <nil>
STEP: delete the pod
Oct 25 16:12:03.315: INFO: Waiting for pod downwardapi-volume-771df37a-1ef3-4544-ae20-0e86c1863de3 to disappear
Oct 25 16:12:03.354: INFO: Pod downwardapi-volume-771df37a-1ef3-4544-ae20-0e86c1863de3 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:12:03.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9521" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":306,"completed":1,"skipped":12,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 23 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:12:11.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8720" for this suite.
STEP: Destroying namespace "webhook-8720-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":306,"completed":2,"skipped":19,"failed":0}

------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 44 lines ...
Oct 25 16:12:55.323: INFO: Deleting pod "simpletest.rc-rcflq" in namespace "gc-9348"
Oct 25 16:12:55.385: INFO: Deleting pod "simpletest.rc-zqsbx" in namespace "gc-9348"
[AfterEach] [sig-api-machinery] Garbage collector
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:12:55.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9348" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":306,"completed":3,"skipped":19,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] Job
... skipping 12 lines ...
Oct 25 16:13:00.890: INFO: Terminating Job.batch foo pods took: 800.287438ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:13:41.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-3914" for this suite.
•{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":306,"completed":4,"skipped":47,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] StatefulSet
... skipping 44 lines ...
Oct 25 16:15:44.622: INFO: Waiting for statefulset status.replicas updated to 0
Oct 25 16:15:44.685: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:15:44.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5587" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":306,"completed":5,"skipped":105,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Oct 25 16:15:45.050: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward api env vars
Oct 25 16:15:45.444: INFO: Waiting up to 5m0s for pod "downward-api-9df0d120-cdb6-45d0-b174-8297b4feabb9" in namespace "downward-api-6336" to be "Succeeded or Failed"
Oct 25 16:15:45.488: INFO: Pod "downward-api-9df0d120-cdb6-45d0-b174-8297b4feabb9": Phase="Pending", Reason="", readiness=false. Elapsed: 44.024509ms
Oct 25 16:15:47.527: INFO: Pod "downward-api-9df0d120-cdb6-45d0-b174-8297b4feabb9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083832426s
Oct 25 16:15:49.572: INFO: Pod "downward-api-9df0d120-cdb6-45d0-b174-8297b4feabb9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.128745891s
STEP: Saw pod success
Oct 25 16:15:49.572: INFO: Pod "downward-api-9df0d120-cdb6-45d0-b174-8297b4feabb9" satisfied condition "Succeeded or Failed"
Oct 25 16:15:49.668: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod downward-api-9df0d120-cdb6-45d0-b174-8297b4feabb9 container dapi-container: <nil>
STEP: delete the pod
Oct 25 16:15:49.908: INFO: Waiting for pod downward-api-9df0d120-cdb6-45d0-b174-8297b4feabb9 to disappear
Oct 25 16:15:49.952: INFO: Pod downward-api-9df0d120-cdb6-45d0-b174-8297b4feabb9 no longer exists
[AfterEach] [sig-node] Downward API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:15:49.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6336" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":306,"completed":6,"skipped":138,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Oct 25 16:15:50.099: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward api env vars
Oct 25 16:15:50.689: INFO: Waiting up to 5m0s for pod "downward-api-fdc4f693-2efa-45cd-881b-52f5eb3f90c9" in namespace "downward-api-8565" to be "Succeeded or Failed"
Oct 25 16:15:50.893: INFO: Pod "downward-api-fdc4f693-2efa-45cd-881b-52f5eb3f90c9": Phase="Pending", Reason="", readiness=false. Elapsed: 203.955638ms
Oct 25 16:15:52.934: INFO: Pod "downward-api-fdc4f693-2efa-45cd-881b-52f5eb3f90c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.245107263s
STEP: Saw pod success
Oct 25 16:15:52.935: INFO: Pod "downward-api-fdc4f693-2efa-45cd-881b-52f5eb3f90c9" satisfied condition "Succeeded or Failed"
Oct 25 16:15:52.974: INFO: Trying to get logs from node bootstrap-e2e-minion-group-jzdr pod downward-api-fdc4f693-2efa-45cd-881b-52f5eb3f90c9 container dapi-container: <nil>
STEP: delete the pod
Oct 25 16:15:53.088: INFO: Waiting for pod downward-api-fdc4f693-2efa-45cd-881b-52f5eb3f90c9 to disappear
Oct 25 16:15:53.128: INFO: Pod downward-api-fdc4f693-2efa-45cd-881b-52f5eb3f90c9 no longer exists
[AfterEach] [sig-node] Downward API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:15:53.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8565" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":306,"completed":7,"skipped":159,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
... skipping 8 lines ...
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:15:55.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-3292" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":306,"completed":8,"skipped":170,"failed":0}
SSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 68 lines ...
Oct 25 16:16:22.782: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2058"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:16:22.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2558" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":306,"completed":9,"skipped":179,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Kubelet
... skipping 7 lines ...
[It] should print the output to logs [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[AfterEach] [k8s.io] Kubelet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:16:25.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4589" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":306,"completed":10,"skipped":190,"failed":0}
S
------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] DNS
... skipping 19 lines ...
Oct 25 16:16:34.781: INFO: >>> kubeConfig: /workspace/.kube/config
Oct 25 16:16:35.099: INFO: Deleting pod test-dns-nameservers...
[AfterEach] [sig-network] DNS
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:16:35.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8450" for this suite.
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":306,"completed":11,"skipped":191,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 23 lines ...
Oct 25 16:16:41.905: INFO: stderr: ""
Oct 25 16:16:41.905: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-6154-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     <empty>\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:16:46.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2768" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":306,"completed":12,"skipped":200,"failed":0}
SSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating secret with name secret-test-map-bf5b94c9-7f22-474f-bf10-acea159673a9
STEP: Creating a pod to test consume secrets
Oct 25 16:16:46.588: INFO: Waiting up to 5m0s for pod "pod-secrets-2f7a4c04-16a6-4137-86bf-843d3eba0712" in namespace "secrets-9772" to be "Succeeded or Failed"
Oct 25 16:16:46.630: INFO: Pod "pod-secrets-2f7a4c04-16a6-4137-86bf-843d3eba0712": Phase="Pending", Reason="", readiness=false. Elapsed: 42.431588ms
Oct 25 16:16:48.668: INFO: Pod "pod-secrets-2f7a4c04-16a6-4137-86bf-843d3eba0712": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.08050387s
STEP: Saw pod success
Oct 25 16:16:48.668: INFO: Pod "pod-secrets-2f7a4c04-16a6-4137-86bf-843d3eba0712" satisfied condition "Succeeded or Failed"
Oct 25 16:16:48.713: INFO: Trying to get logs from node bootstrap-e2e-minion-group-jzdr pod pod-secrets-2f7a4c04-16a6-4137-86bf-843d3eba0712 container secret-volume-test: <nil>
STEP: delete the pod
Oct 25 16:16:48.801: INFO: Waiting for pod pod-secrets-2f7a4c04-16a6-4137-86bf-843d3eba0712 to disappear
Oct 25 16:16:48.837: INFO: Pod pod-secrets-2f7a4c04-16a6-4137-86bf-843d3eba0712 no longer exists
[AfterEach] [sig-storage] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:16:48.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9772" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":306,"completed":13,"skipped":205,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] LimitRange 
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-scheduling] LimitRange
... skipping 31 lines ...
Oct 25 16:16:56.953: INFO: limitRange is already deleted
STEP: Creating a Pod with more than former max resources
[AfterEach] [sig-scheduling] LimitRange
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:16:56.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "limitrange-8221" for this suite.
•{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":306,"completed":14,"skipped":255,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] 
  evicts pods with minTolerationSeconds [Disruptive] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial]
... skipping 19 lines ...
Oct 25 16:18:30.976: INFO: Noticed Pod "taint-eviction-b2" gets evicted.
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute
[AfterEach] [k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:18:31.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "taint-multiple-pods-8278" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]","total":306,"completed":15,"skipped":276,"failed":0}
SS
------------------------------
[sig-network] Services 
  should find a service from listing all namespaces [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Services
... skipping 10 lines ...
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:18:31.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4188" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:781
•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":306,"completed":16,"skipped":278,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Oct 25 16:18:31.855: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir volume type on node default medium
Oct 25 16:18:32.080: INFO: Waiting up to 5m0s for pod "pod-54bc560d-062c-41f7-b361-8c5d9579c9c3" in namespace "emptydir-5078" to be "Succeeded or Failed"
Oct 25 16:18:32.119: INFO: Pod "pod-54bc560d-062c-41f7-b361-8c5d9579c9c3": Phase="Pending", Reason="", readiness=false. Elapsed: 38.756793ms
Oct 25 16:18:34.279: INFO: Pod "pod-54bc560d-062c-41f7-b361-8c5d9579c9c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.199488142s
STEP: Saw pod success
Oct 25 16:18:34.279: INFO: Pod "pod-54bc560d-062c-41f7-b361-8c5d9579c9c3" satisfied condition "Succeeded or Failed"
Oct 25 16:18:34.420: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod pod-54bc560d-062c-41f7-b361-8c5d9579c9c3 container test-container: <nil>
STEP: delete the pod
Oct 25 16:18:34.901: INFO: Waiting for pod pod-54bc560d-062c-41f7-b361-8c5d9579c9c3 to disappear
Oct 25 16:18:34.939: INFO: Pod pod-54bc560d-062c-41f7-b361-8c5d9579c9c3 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:18:34.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5078" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":17,"skipped":289,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 12 lines ...
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:18:35.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9544" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":306,"completed":18,"skipped":296,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 21 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:18:42.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7243" for this suite.
STEP: Destroying namespace "webhook-7243-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":306,"completed":19,"skipped":313,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Oct 25 16:18:42.717: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-532fc085-59f0-43e2-85f4-e33a00d5dd17" in namespace "security-context-test-2887" to be "Succeeded or Failed"
Oct 25 16:18:42.755: INFO: Pod "alpine-nnp-false-532fc085-59f0-43e2-85f4-e33a00d5dd17": Phase="Pending", Reason="", readiness=false. Elapsed: 37.294786ms
Oct 25 16:18:44.791: INFO: Pod "alpine-nnp-false-532fc085-59f0-43e2-85f4-e33a00d5dd17": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073936628s
Oct 25 16:18:46.842: INFO: Pod "alpine-nnp-false-532fc085-59f0-43e2-85f4-e33a00d5dd17": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.124794977s
Oct 25 16:18:46.842: INFO: Pod "alpine-nnp-false-532fc085-59f0-43e2-85f4-e33a00d5dd17" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:18:46.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2887" for this suite.
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":20,"skipped":329,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Oct 25 16:18:47.384: INFO: Waiting up to 5m0s for pod "busybox-user-65534-ee57e16f-23dd-4c9b-9a98-69fb4aa28640" in namespace "security-context-test-2040" to be "Succeeded or Failed"
Oct 25 16:18:47.478: INFO: Pod "busybox-user-65534-ee57e16f-23dd-4c9b-9a98-69fb4aa28640": Phase="Pending", Reason="", readiness=false. Elapsed: 93.861093ms
Oct 25 16:18:49.518: INFO: Pod "busybox-user-65534-ee57e16f-23dd-4c9b-9a98-69fb4aa28640": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.133673576s
Oct 25 16:18:49.518: INFO: Pod "busybox-user-65534-ee57e16f-23dd-4c9b-9a98-69fb4aa28640" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:18:49.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2040" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":21,"skipped":345,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-cli] Kubectl client
... skipping 141 lines ...
Oct 25 16:19:12.002: INFO: stderr: ""
Oct 25 16:19:12.003: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:19:12.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5037" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":306,"completed":22,"skipped":423,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] [sig-node] Events
... skipping 16 lines ...
Oct 25 16:19:18.698: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:19:18.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-7609" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":306,"completed":23,"skipped":435,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Oct 25 16:19:19.071: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d38e6634-b3fa-4a6e-a3a0-fbbb31e5c281" in namespace "downward-api-6545" to be "Succeeded or Failed"
Oct 25 16:19:19.121: INFO: Pod "downwardapi-volume-d38e6634-b3fa-4a6e-a3a0-fbbb31e5c281": Phase="Pending", Reason="", readiness=false. Elapsed: 50.434972ms
Oct 25 16:19:21.158: INFO: Pod "downwardapi-volume-d38e6634-b3fa-4a6e-a3a0-fbbb31e5c281": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.087452375s
STEP: Saw pod success
Oct 25 16:19:21.158: INFO: Pod "downwardapi-volume-d38e6634-b3fa-4a6e-a3a0-fbbb31e5c281" satisfied condition "Succeeded or Failed"
Oct 25 16:19:21.195: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod downwardapi-volume-d38e6634-b3fa-4a6e-a3a0-fbbb31e5c281 container client-container: <nil>
STEP: delete the pod
Oct 25 16:19:21.281: INFO: Waiting for pod downwardapi-volume-d38e6634-b3fa-4a6e-a3a0-fbbb31e5c281 to disappear
Oct 25 16:19:21.318: INFO: Pod downwardapi-volume-d38e6634-b3fa-4a6e-a3a0-fbbb31e5c281 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:19:21.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6545" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":306,"completed":24,"skipped":458,"failed":0}
S
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Services
... skipping 24 lines ...
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:19:26.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3914" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:781
•{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":306,"completed":25,"skipped":459,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 19 lines ...
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:19:43.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-446" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":306,"completed":26,"skipped":501,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-cli] Kubectl client
... skipping 19 lines ...
Oct 25 16:20:20.963: INFO: stderr: ""
Oct 25 16:20:20.963: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:20:20.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4436" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":306,"completed":27,"skipped":512,"failed":0}
SSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Services
... skipping 11 lines ...
STEP: creating replication controller externalname-service in namespace services-8001
I1025 16:20:21.419425  144261 runners.go:190] Created replication controller with name: externalname-service, namespace: services-8001, replica count: 2
Oct 25 16:20:24.520: INFO: Creating new exec pod
I1025 16:20:24.520123  144261 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Oct 25 16:20:27.771: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.44.183 --kubeconfig=/workspace/.kube/config --namespace=services-8001 exec execpodzx59l -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Oct 25 16:20:29.334: INFO: rc: 1
Oct 25 16:20:29.334: INFO: Service reachability failing with error: error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.44.183 --kubeconfig=/workspace/.kube/config --namespace=services-8001 exec execpodzx59l -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 externalname-service 80
nc: connect to externalname-service port 80 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 25 16:20:30.334: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.44.183 --kubeconfig=/workspace/.kube/config --namespace=services-8001 exec execpodzx59l -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Oct 25 16:20:31.909: INFO: stderr: "+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
Oct 25 16:20:31.909: INFO: stdout: ""
Oct 25 16:20:31.910: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.44.183 --kubeconfig=/workspace/.kube/config --namespace=services-8001 exec execpodzx59l -- /bin/sh -x -c nc -zv -t -w 2 10.0.232.94 80'
... skipping 15 lines ...
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:20:34.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8001" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:781
•{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":306,"completed":28,"skipped":515,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] Deployment
... skipping 20 lines ...
Oct 25 16:20:39.375: INFO: Pod "test-cleanup-deployment-685c4f8568-6tbqb" is available:
&Pod{ObjectMeta:{test-cleanup-deployment-685c4f8568-6tbqb test-cleanup-deployment-685c4f8568- deployment-5803  96ebbbe0-e0d2-4bb2-93e2-8a930af45e9d 3096 0 2020-10-25 16:20:37 +0000 UTC <nil> <nil> map[name:cleanup-pod pod-template-hash:685c4f8568] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-685c4f8568 40edd954-8719-4373-9125-55a2b3760344 0xc002293dc7 0xc002293dc8}] []  [{kube-controller-manager Update v1 2020-10-25 16:20:37 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"40edd954-8719-4373-9125-55a2b3760344\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-25 16:20:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.64.1.19\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mvvs4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mvvs4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mvvs4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-jzdr,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,Run
AsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:20:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:20:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:20:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:20:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.3,PodIP:10.64.1.19,StartTime:2020-10-25 16:20:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-25 16:20:38 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a,ContainerID:containerd://2d035931b4fda2df95ca530faeae0e45fe91ae7cdbd33db6cbcf87d3e204d437,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.64.1.19,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:20:39.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-5803" for this suite.
•{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":306,"completed":29,"skipped":530,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 12 lines ...
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:20:51.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7438" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":306,"completed":30,"skipped":544,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating secret with name secret-test-f35596a6-6ead-4150-b1b3-0e8cad269a03
STEP: Creating a pod to test consume secrets
Oct 25 16:20:51.669: INFO: Waiting up to 5m0s for pod "pod-secrets-771f3a5b-551c-4f0b-a82c-6c547baf7bb2" in namespace "secrets-51" to be "Succeeded or Failed"
Oct 25 16:20:51.712: INFO: Pod "pod-secrets-771f3a5b-551c-4f0b-a82c-6c547baf7bb2": Phase="Pending", Reason="", readiness=false. Elapsed: 42.902308ms
Oct 25 16:20:53.758: INFO: Pod "pod-secrets-771f3a5b-551c-4f0b-a82c-6c547baf7bb2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.088229828s
STEP: Saw pod success
Oct 25 16:20:53.758: INFO: Pod "pod-secrets-771f3a5b-551c-4f0b-a82c-6c547baf7bb2" satisfied condition "Succeeded or Failed"
Oct 25 16:20:53.798: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod pod-secrets-771f3a5b-551c-4f0b-a82c-6c547baf7bb2 container secret-volume-test: <nil>
STEP: delete the pod
Oct 25 16:20:54.105: INFO: Waiting for pod pod-secrets-771f3a5b-551c-4f0b-a82c-6c547baf7bb2 to disappear
Oct 25 16:20:54.140: INFO: Pod pod-secrets-771f3a5b-551c-4f0b-a82c-6c547baf7bb2 no longer exists
[AfterEach] [sig-storage] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:20:54.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-51" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":306,"completed":31,"skipped":552,"failed":0}
S
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check is all data is printed  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-cli] Kubectl client
... skipping 10 lines ...
Oct 25 16:20:54.596: INFO: stderr: ""
Oct 25 16:20:54.596: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"20+\", GitVersion:\"v1.20.0-alpha.3.114+5935fcd704fe89\", GitCommit:\"5935fcd704fe89048776d02cf1ef4f939743c042\", GitTreeState:\"clean\", BuildDate:\"2020-10-24T03:47:00Z\", GoVersion:\"go1.15.2\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"20+\", GitVersion:\"v1.20.0-alpha.3.114+5935fcd704fe89\", GitCommit:\"5935fcd704fe89048776d02cf1ef4f939743c042\", GitTreeState:\"clean\", BuildDate:\"2020-10-24T03:47:00Z\", GoVersion:\"go1.15.2\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:20:54.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3955" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":306,"completed":32,"skipped":553,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 24 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:20:59.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8147" for this suite.
STEP: Destroying namespace "webhook-8147-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":306,"completed":33,"skipped":558,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 22 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:21:07.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6079" for this suite.
STEP: Destroying namespace "webhook-6079-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":306,"completed":34,"skipped":569,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 23 lines ...
Oct 25 16:21:22.492: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Oct 25 16:21:22.537: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:21:22.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2395" for this suite.
•{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":306,"completed":35,"skipped":655,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Pods
... skipping 11 lines ...
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Oct 25 16:21:26.714: INFO: Successfully updated pod "pod-update-activedeadlineseconds-2c7728d5-b4be-4616-8bd4-a7a56673b85b"
Oct 25 16:21:26.714: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-2c7728d5-b4be-4616-8bd4-a7a56673b85b" in namespace "pods-2354" to be "terminated due to deadline exceeded"
Oct 25 16:21:26.750: INFO: Pod "pod-update-activedeadlineseconds-2c7728d5-b4be-4616-8bd4-a7a56673b85b": Phase="Running", Reason="", readiness=true. Elapsed: 36.428494ms
Oct 25 16:21:28.789: INFO: Pod "pod-update-activedeadlineseconds-2c7728d5-b4be-4616-8bd4-a7a56673b85b": Phase="Running", Reason="", readiness=true. Elapsed: 2.075039515s
Oct 25 16:21:30.832: INFO: Pod "pod-update-activedeadlineseconds-2c7728d5-b4be-4616-8bd4-a7a56673b85b": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.117665019s
Oct 25 16:21:30.832: INFO: Pod "pod-update-activedeadlineseconds-2c7728d5-b4be-4616-8bd4-a7a56673b85b" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:21:30.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2354" for this suite.
•{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":306,"completed":36,"skipped":700,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Container Runtime
... skipping 12 lines ...
Oct 25 16:21:33.622: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:21:33.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8657" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":306,"completed":37,"skipped":723,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] Daemon set [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 25 16:21:33.791: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:129
[It] should retry creating failed daemon pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Oct 25 16:21:34.304: INFO: DaemonSet pods can't tolerate node bootstrap-e2e-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Oct 25 16:21:34.365: INFO: Number of nodes with available pods: 0
Oct 25 16:21:34.365: INFO: Node bootstrap-e2e-minion-group-05w9 is running more than one daemon pod
... skipping 3 lines ...
Oct 25 16:21:36.411: INFO: DaemonSet pods can't tolerate node bootstrap-e2e-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Oct 25 16:21:36.456: INFO: Number of nodes with available pods: 2
Oct 25 16:21:36.456: INFO: Node bootstrap-e2e-minion-group-05w9 is running more than one daemon pod
Oct 25 16:21:37.422: INFO: DaemonSet pods can't tolerate node bootstrap-e2e-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Oct 25 16:21:37.528: INFO: Number of nodes with available pods: 3
Oct 25 16:21:37.528: INFO: Number of running nodes: 3, number of available pods: 3
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Oct 25 16:21:37.764: INFO: DaemonSet pods can't tolerate node bootstrap-e2e-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Oct 25 16:21:37.829: INFO: Number of nodes with available pods: 2
Oct 25 16:21:37.829: INFO: Node bootstrap-e2e-minion-group-nmms is running more than one daemon pod
Oct 25 16:21:38.894: INFO: DaemonSet pods can't tolerate node bootstrap-e2e-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Oct 25 16:21:38.995: INFO: Number of nodes with available pods: 2
Oct 25 16:21:38.995: INFO: Node bootstrap-e2e-minion-group-nmms is running more than one daemon pod
Oct 25 16:21:40.058: INFO: DaemonSet pods can't tolerate node bootstrap-e2e-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Oct 25 16:21:40.250: INFO: Number of nodes with available pods: 3
Oct 25 16:21:40.250: INFO: Number of running nodes: 3, number of available pods: 3
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:95
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9769, will wait for the garbage collector to delete the pods
Oct 25 16:21:40.457: INFO: Deleting DaemonSet.extensions daemon-set took: 40.729576ms
Oct 25 16:21:41.157: INFO: Terminating DaemonSet.extensions daemon-set pods took: 700.271648ms
... skipping 4 lines ...
Oct 25 16:21:53.114: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"3527"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:21:53.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9769" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":306,"completed":38,"skipped":736,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Oct 25 16:21:53.990: INFO: Waiting up to 5m0s for pod "downwardapi-volume-74a21b42-0351-4bf6-bebc-a467e6b93af0" in namespace "projected-535" to be "Succeeded or Failed"
Oct 25 16:21:54.048: INFO: Pod "downwardapi-volume-74a21b42-0351-4bf6-bebc-a467e6b93af0": Phase="Pending", Reason="", readiness=false. Elapsed: 57.55791ms
Oct 25 16:21:56.085: INFO: Pod "downwardapi-volume-74a21b42-0351-4bf6-bebc-a467e6b93af0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.095109082s
STEP: Saw pod success
Oct 25 16:21:56.085: INFO: Pod "downwardapi-volume-74a21b42-0351-4bf6-bebc-a467e6b93af0" satisfied condition "Succeeded or Failed"
Oct 25 16:21:56.122: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod downwardapi-volume-74a21b42-0351-4bf6-bebc-a467e6b93af0 container client-container: <nil>
STEP: delete the pod
Oct 25 16:21:56.211: INFO: Waiting for pod downwardapi-volume-74a21b42-0351-4bf6-bebc-a467e6b93af0 to disappear
Oct 25 16:21:56.247: INFO: Pod downwardapi-volume-74a21b42-0351-4bf6-bebc-a467e6b93af0 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:21:56.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-535" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":306,"completed":39,"skipped":748,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 24 lines ...
Oct 25 16:21:57.521: INFO: created pod pod-service-account-nomountsa-nomountspec
Oct 25 16:21:57.521: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:21:57.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-5542" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":306,"completed":40,"skipped":765,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Oct 25 16:21:57.602: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0777 on tmpfs
Oct 25 16:21:57.827: INFO: Waiting up to 5m0s for pod "pod-c3879916-cb63-449e-b1e6-894375b5935f" in namespace "emptydir-9727" to be "Succeeded or Failed"
Oct 25 16:21:57.864: INFO: Pod "pod-c3879916-cb63-449e-b1e6-894375b5935f": Phase="Pending", Reason="", readiness=false. Elapsed: 36.911938ms
Oct 25 16:21:59.914: INFO: Pod "pod-c3879916-cb63-449e-b1e6-894375b5935f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087410714s
Oct 25 16:22:02.068: INFO: Pod "pod-c3879916-cb63-449e-b1e6-894375b5935f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.241539582s
STEP: Saw pod success
Oct 25 16:22:02.069: INFO: Pod "pod-c3879916-cb63-449e-b1e6-894375b5935f" satisfied condition "Succeeded or Failed"
Oct 25 16:22:02.108: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod pod-c3879916-cb63-449e-b1e6-894375b5935f container test-container: <nil>
STEP: delete the pod
Oct 25 16:22:02.338: INFO: Waiting for pod pod-c3879916-cb63-449e-b1e6-894375b5935f to disappear
Oct 25 16:22:02.376: INFO: Pod pod-c3879916-cb63-449e-b1e6-894375b5935f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:22:02.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9727" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":41,"skipped":775,"failed":0}
SSSSSSSSS
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] Job
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 25 16:22:02.455: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:22:10.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-6003" for this suite.
•{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":306,"completed":42,"skipped":784,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Oct 25 16:22:11.031: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d0e0485f-1a5d-451b-bc1d-0d24f54af326" in namespace "projected-796" to be "Succeeded or Failed"
Oct 25 16:22:11.068: INFO: Pod "downwardapi-volume-d0e0485f-1a5d-451b-bc1d-0d24f54af326": Phase="Pending", Reason="", readiness=false. Elapsed: 36.69821ms
Oct 25 16:22:13.109: INFO: Pod "downwardapi-volume-d0e0485f-1a5d-451b-bc1d-0d24f54af326": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.077317604s
STEP: Saw pod success
Oct 25 16:22:13.109: INFO: Pod "downwardapi-volume-d0e0485f-1a5d-451b-bc1d-0d24f54af326" satisfied condition "Succeeded or Failed"
Oct 25 16:22:13.146: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod downwardapi-volume-d0e0485f-1a5d-451b-bc1d-0d24f54af326 container client-container: <nil>
STEP: delete the pod
Oct 25 16:22:13.248: INFO: Waiting for pod downwardapi-volume-d0e0485f-1a5d-451b-bc1d-0d24f54af326 to disappear
Oct 25 16:22:13.288: INFO: Pod downwardapi-volume-d0e0485f-1a5d-451b-bc1d-0d24f54af326 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:22:13.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-796" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":306,"completed":43,"skipped":793,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Container Runtime
... skipping 2 lines ...
Oct 25 16:22:13.379: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Oct 25 16:22:15.848: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:22:15.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4474" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":306,"completed":44,"skipped":821,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Oct 25 16:22:16.263: INFO: Waiting up to 5m0s for pod "downwardapi-volume-11936087-afc6-49bb-9156-0f41c25c5a09" in namespace "projected-5406" to be "Succeeded or Failed"
Oct 25 16:22:16.309: INFO: Pod "downwardapi-volume-11936087-afc6-49bb-9156-0f41c25c5a09": Phase="Pending", Reason="", readiness=false. Elapsed: 46.368016ms
Oct 25 16:22:18.356: INFO: Pod "downwardapi-volume-11936087-afc6-49bb-9156-0f41c25c5a09": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.093308844s
STEP: Saw pod success
Oct 25 16:22:18.356: INFO: Pod "downwardapi-volume-11936087-afc6-49bb-9156-0f41c25c5a09" satisfied condition "Succeeded or Failed"
Oct 25 16:22:18.432: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod downwardapi-volume-11936087-afc6-49bb-9156-0f41c25c5a09 container client-container: <nil>
STEP: delete the pod
Oct 25 16:22:18.663: INFO: Waiting for pod downwardapi-volume-11936087-afc6-49bb-9156-0f41c25c5a09 to disappear
Oct 25 16:22:18.754: INFO: Pod downwardapi-volume-11936087-afc6-49bb-9156-0f41c25c5a09 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:22:18.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5406" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":306,"completed":45,"skipped":838,"failed":0}
SSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Networking
... skipping 39 lines ...
Oct 25 16:22:40.842: INFO: reached 10.64.3.19 after 0/1 tries
Oct 25 16:22:40.842: INFO: Going to retry 0 out of 3 pods....
[AfterEach] [sig-network] Networking
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:22:40.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4998" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":306,"completed":46,"skipped":841,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-cli] Kubectl client
... skipping 10 lines ...
Oct 25 16:22:41.121: INFO: Asynchronously running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl kubectl --server=https://35.247.44.183 --kubeconfig=/workspace/.kube/config --namespace=kubectl-3761 proxy --unix-socket=/tmp/kubectl-proxy-unix690975300/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:22:41.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3761" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":306,"completed":47,"skipped":860,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 25 16:22:41.285: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename webhook
... skipping 6 lines ...
STEP: Wait for the deployment to be ready
Oct 25 16:22:42.781: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739239762, loc:(*time.Location)(0x77697a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739239762, loc:(*time.Location)(0x77697a0)}}, Reason:"NewReplicaSetCreated", Message:"Created new replica set \"sample-webhook-deployment-6bd9446d55\""}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739239762, loc:(*time.Location)(0x77697a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739239762, loc:(*time.Location)(0x77697a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)}
Oct 25 16:22:44.821: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739239762, loc:(*time.Location)(0x77697a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739239762, loc:(*time.Location)(0x77697a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739239762, loc:(*time.Location)(0x77697a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739239762, loc:(*time.Location)(0x77697a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Oct 25 16:22:47.865: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:22:48.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6121" for this suite.
STEP: Destroying namespace "webhook-6121-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":306,"completed":48,"skipped":871,"failed":0}
SSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-cli] Kubectl client
... skipping 10 lines ...
Oct 25 16:22:49.351: INFO: Asynchronously running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl kubectl --server=https://35.247.44.183 --kubeconfig=/workspace/.kube/config --namespace=kubectl-6225 proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:22:49.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6225" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":306,"completed":49,"skipped":874,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Service endpoints latency
... skipping 417 lines ...
Oct 25 16:23:02.703: INFO: 99 %ile: 2.716769281s
Oct 25 16:23:02.703: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:23:02.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-8818" for this suite.
•{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":306,"completed":50,"skipped":900,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 12 lines ...
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:23:19.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7005" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":306,"completed":51,"skipped":911,"failed":0}
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating projection with secret that has name projected-secret-test-ac114b76-4e11-4739-b915-afb674720b01
STEP: Creating a pod to test consume secrets
Oct 25 16:23:20.437: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-edffc211-6a16-4e82-b056-be9e7ab9f86a" in namespace "projected-3441" to be "Succeeded or Failed"
Oct 25 16:23:20.914: INFO: Pod "pod-projected-secrets-edffc211-6a16-4e82-b056-be9e7ab9f86a": Phase="Pending", Reason="", readiness=false. Elapsed: 476.492292ms
Oct 25 16:23:22.994: INFO: Pod "pod-projected-secrets-edffc211-6a16-4e82-b056-be9e7ab9f86a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.557240152s
STEP: Saw pod success
Oct 25 16:23:22.995: INFO: Pod "pod-projected-secrets-edffc211-6a16-4e82-b056-be9e7ab9f86a" satisfied condition "Succeeded or Failed"
Oct 25 16:23:23.256: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod pod-projected-secrets-edffc211-6a16-4e82-b056-be9e7ab9f86a container projected-secret-volume-test: <nil>
STEP: delete the pod
Oct 25 16:23:23.371: INFO: Waiting for pod pod-projected-secrets-edffc211-6a16-4e82-b056-be9e7ab9f86a to disappear
Oct 25 16:23:23.418: INFO: Pod pod-projected-secrets-edffc211-6a16-4e82-b056-be9e7ab9f86a no longer exists
[AfterEach] [sig-storage] Projected secret
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:23:23.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3441" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":52,"skipped":912,"failed":0}
SSSSSSS
------------------------------
[sig-auth] Certificates API [Privileged:ClusterAdmin] 
  should support CSR API operations [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin]
... skipping 25 lines ...
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:23:25.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "certificates-4530" for this suite.
•{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":306,"completed":53,"skipped":919,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] Deployment
... skipping 27 lines ...
Oct 25 16:23:34.305: INFO: Pod "test-rolling-update-deployment-6b6bf9df46-lwvmv" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-6b6bf9df46-lwvmv test-rolling-update-deployment-6b6bf9df46- deployment-3609  18da1958-118b-4d9e-80e9-da364bcc030a 5778 0 2020-10-25 16:23:31 +0000 UTC <nil> <nil> map[name:sample-pod pod-template-hash:6b6bf9df46] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-6b6bf9df46 36bdee28-386e-4c79-a694-0d9de94c7c66 0xc003bfb0c7 0xc003bfb0c8}] []  [{kube-controller-manager Update v1 2020-10-25 16:23:31 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"36bdee28-386e-4c79-a694-0d9de94c7c66\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-25 16:23:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.64.2.58\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-49xvr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-49xvr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-49xvr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-05w9,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:23:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:23:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:23:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:23:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.5,PodIP:10.64.2.58,StartTime:2020-10-25 16:23:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-25 16:23:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a,ContainerID:containerd://d110910bd2320d15edab33f3a7db382886cd3923da7d87f76363d00987f0258e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.64.2.58,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:23:34.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3609" for this suite.
•{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":306,"completed":54,"skipped":930,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Oct 25 16:23:34.883: INFO: Waiting up to 5m0s for pod "downwardapi-volume-40498870-706f-4035-982d-48b5a41c1767" in namespace "projected-2459" to be "Succeeded or Failed"
Oct 25 16:23:34.976: INFO: Pod "downwardapi-volume-40498870-706f-4035-982d-48b5a41c1767": Phase="Pending", Reason="", readiness=false. Elapsed: 93.237642ms
Oct 25 16:23:37.013: INFO: Pod "downwardapi-volume-40498870-706f-4035-982d-48b5a41c1767": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.130495698s
STEP: Saw pod success
Oct 25 16:23:37.013: INFO: Pod "downwardapi-volume-40498870-706f-4035-982d-48b5a41c1767" satisfied condition "Succeeded or Failed"
Oct 25 16:23:37.050: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod downwardapi-volume-40498870-706f-4035-982d-48b5a41c1767 container client-container: <nil>
STEP: delete the pod
Oct 25 16:23:37.147: INFO: Waiting for pod downwardapi-volume-40498870-706f-4035-982d-48b5a41c1767 to disappear
Oct 25 16:23:37.183: INFO: Pod downwardapi-volume-40498870-706f-4035-982d-48b5a41c1767 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:23:37.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2459" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":55,"skipped":996,"failed":0}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 22 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:23:47.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1206" for this suite.
STEP: Destroying namespace "webhook-1206-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":306,"completed":56,"skipped":997,"failed":0}
SSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Secrets
... skipping 12 lines ...
STEP: Creating secret with name s-test-opt-create-6940bc0a-4759-4a77-8243-74c2986150ae
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:23:53.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2894" for this suite.
•{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":306,"completed":57,"skipped":1002,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating pod pod-subpath-test-configmap-79mw
STEP: Creating a pod to test atomic-volume-subpath
Oct 25 16:23:53.959: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-79mw" in namespace "subpath-7026" to be "Succeeded or Failed"
Oct 25 16:23:53.995: INFO: Pod "pod-subpath-test-configmap-79mw": Phase="Pending", Reason="", readiness=false. Elapsed: 36.335038ms
Oct 25 16:23:56.032: INFO: Pod "pod-subpath-test-configmap-79mw": Phase="Running", Reason="", readiness=true. Elapsed: 2.073054811s
Oct 25 16:23:58.094: INFO: Pod "pod-subpath-test-configmap-79mw": Phase="Running", Reason="", readiness=true. Elapsed: 4.135273531s
Oct 25 16:24:00.134: INFO: Pod "pod-subpath-test-configmap-79mw": Phase="Running", Reason="", readiness=true. Elapsed: 6.175068487s
Oct 25 16:24:02.171: INFO: Pod "pod-subpath-test-configmap-79mw": Phase="Running", Reason="", readiness=true. Elapsed: 8.212029202s
Oct 25 16:24:04.208: INFO: Pod "pod-subpath-test-configmap-79mw": Phase="Running", Reason="", readiness=true. Elapsed: 10.249268644s
Oct 25 16:24:06.260: INFO: Pod "pod-subpath-test-configmap-79mw": Phase="Running", Reason="", readiness=true. Elapsed: 12.301323405s
Oct 25 16:24:08.645: INFO: Pod "pod-subpath-test-configmap-79mw": Phase="Running", Reason="", readiness=true. Elapsed: 14.686470796s
Oct 25 16:24:10.682: INFO: Pod "pod-subpath-test-configmap-79mw": Phase="Running", Reason="", readiness=true. Elapsed: 16.722960928s
Oct 25 16:24:12.795: INFO: Pod "pod-subpath-test-configmap-79mw": Phase="Running", Reason="", readiness=true. Elapsed: 18.836177149s
Oct 25 16:24:14.921: INFO: Pod "pod-subpath-test-configmap-79mw": Phase="Running", Reason="", readiness=true. Elapsed: 20.962261546s
Oct 25 16:24:16.958: INFO: Pod "pod-subpath-test-configmap-79mw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.999300597s
STEP: Saw pod success
Oct 25 16:24:16.958: INFO: Pod "pod-subpath-test-configmap-79mw" satisfied condition "Succeeded or Failed"
Oct 25 16:24:16.994: INFO: Trying to get logs from node bootstrap-e2e-minion-group-jzdr pod pod-subpath-test-configmap-79mw container test-container-subpath-configmap-79mw: <nil>
STEP: delete the pod
Oct 25 16:24:17.100: INFO: Waiting for pod pod-subpath-test-configmap-79mw to disappear
Oct 25 16:24:17.136: INFO: Pod pod-subpath-test-configmap-79mw no longer exists
STEP: Deleting pod pod-subpath-test-configmap-79mw
Oct 25 16:24:17.136: INFO: Deleting pod "pod-subpath-test-configmap-79mw" in namespace "subpath-7026"
[AfterEach] [sig-storage] Subpath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:24:17.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7026" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":306,"completed":58,"skipped":1027,"failed":0}

------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Oct 25 16:24:17.434: INFO: >>> kubeConfig: /workspace/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:24:18.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8801" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":306,"completed":59,"skipped":1027,"failed":0}
SSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating secret with name secret-test-36f8c438-d905-4567-9d36-10d0f4f45017
STEP: Creating a pod to test consume secrets
Oct 25 16:24:20.377: INFO: Waiting up to 5m0s for pod "pod-secrets-e6003b55-0ba4-46c0-a911-d6d42b988560" in namespace "secrets-2776" to be "Succeeded or Failed"
Oct 25 16:24:20.538: INFO: Pod "pod-secrets-e6003b55-0ba4-46c0-a911-d6d42b988560": Phase="Pending", Reason="", readiness=false. Elapsed: 161.828601ms
Oct 25 16:24:22.580: INFO: Pod "pod-secrets-e6003b55-0ba4-46c0-a911-d6d42b988560": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.203640111s
STEP: Saw pod success
Oct 25 16:24:22.580: INFO: Pod "pod-secrets-e6003b55-0ba4-46c0-a911-d6d42b988560" satisfied condition "Succeeded or Failed"
Oct 25 16:24:22.625: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod pod-secrets-e6003b55-0ba4-46c0-a911-d6d42b988560 container secret-volume-test: <nil>
STEP: delete the pod
Oct 25 16:24:22.745: INFO: Waiting for pod pod-secrets-e6003b55-0ba4-46c0-a911-d6d42b988560 to disappear
Oct 25 16:24:22.792: INFO: Pod pod-secrets-e6003b55-0ba4-46c0-a911-d6d42b988560 no longer exists
[AfterEach] [sig-storage] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:24:22.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2776" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":60,"skipped":1032,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Downward API volume
... skipping 9 lines ...
STEP: Creating the pod
Oct 25 16:24:25.860: INFO: Successfully updated pod "labelsupdateb97cf39f-8bb8-43b4-8cc6-db787450138e"
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:24:30.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8302" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":306,"completed":61,"skipped":1041,"failed":0}
SS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Oct 25 16:24:30.091: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward api env vars
Oct 25 16:24:30.317: INFO: Waiting up to 5m0s for pod "downward-api-4948b55f-3191-4f03-abfe-26859f500e15" in namespace "downward-api-9327" to be "Succeeded or Failed"
Oct 25 16:24:30.361: INFO: Pod "downward-api-4948b55f-3191-4f03-abfe-26859f500e15": Phase="Pending", Reason="", readiness=false. Elapsed: 44.693853ms
Oct 25 16:24:32.399: INFO: Pod "downward-api-4948b55f-3191-4f03-abfe-26859f500e15": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.081960809s
STEP: Saw pod success
Oct 25 16:24:32.399: INFO: Pod "downward-api-4948b55f-3191-4f03-abfe-26859f500e15" satisfied condition "Succeeded or Failed"
Oct 25 16:24:32.435: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod downward-api-4948b55f-3191-4f03-abfe-26859f500e15 container dapi-container: <nil>
STEP: delete the pod
Oct 25 16:24:32.522: INFO: Waiting for pod downward-api-4948b55f-3191-4f03-abfe-26859f500e15 to disappear
Oct 25 16:24:32.560: INFO: Pod downward-api-4948b55f-3191-4f03-abfe-26859f500e15 no longer exists
[AfterEach] [sig-node] Downward API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:24:32.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9327" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":306,"completed":62,"skipped":1043,"failed":0}
SS
------------------------------
[sig-apps] Job 
  should adopt matching orphans and release non-matching pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] Job
... skipping 19 lines ...
Oct 25 16:24:38.224: INFO: Pod "adopt-release-lh4t9": Phase="Running", Reason="", readiness=true. Elapsed: 39.985492ms
Oct 25 16:24:38.224: INFO: Pod "adopt-release-lh4t9" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:24:38.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-4045" for this suite.
•{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":306,"completed":63,"skipped":1045,"failed":0}
SS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Variable Expansion
... skipping 2 lines ...
Oct 25 16:24:38.305: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test substitution in container's command
Oct 25 16:24:38.529: INFO: Waiting up to 5m0s for pod "var-expansion-2c0fa19a-6b24-4549-8395-033324fab1db" in namespace "var-expansion-8953" to be "Succeeded or Failed"
Oct 25 16:24:38.565: INFO: Pod "var-expansion-2c0fa19a-6b24-4549-8395-033324fab1db": Phase="Pending", Reason="", readiness=false. Elapsed: 35.927568ms
Oct 25 16:24:40.602: INFO: Pod "var-expansion-2c0fa19a-6b24-4549-8395-033324fab1db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.073050925s
STEP: Saw pod success
Oct 25 16:24:40.602: INFO: Pod "var-expansion-2c0fa19a-6b24-4549-8395-033324fab1db" satisfied condition "Succeeded or Failed"
Oct 25 16:24:40.638: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod var-expansion-2c0fa19a-6b24-4549-8395-033324fab1db container dapi-container: <nil>
STEP: delete the pod
Oct 25 16:24:40.724: INFO: Waiting for pod var-expansion-2c0fa19a-6b24-4549-8395-033324fab1db to disappear
Oct 25 16:24:40.760: INFO: Pod var-expansion-2c0fa19a-6b24-4549-8395-033324fab1db no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:24:40.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-8953" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":306,"completed":64,"skipped":1047,"failed":0}
SSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name configmap-test-volume-9f4dee91-0d4a-462e-b1e6-2645549b6130
STEP: Creating a pod to test consume configMaps
Oct 25 16:24:41.108: INFO: Waiting up to 5m0s for pod "pod-configmaps-d5799126-736d-4027-9c71-bd6814b08ed6" in namespace "configmap-6391" to be "Succeeded or Failed"
Oct 25 16:24:41.145: INFO: Pod "pod-configmaps-d5799126-736d-4027-9c71-bd6814b08ed6": Phase="Pending", Reason="", readiness=false. Elapsed: 36.110156ms
Oct 25 16:24:43.324: INFO: Pod "pod-configmaps-d5799126-736d-4027-9c71-bd6814b08ed6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.215967438s
STEP: Saw pod success
Oct 25 16:24:43.325: INFO: Pod "pod-configmaps-d5799126-736d-4027-9c71-bd6814b08ed6" satisfied condition "Succeeded or Failed"
Oct 25 16:24:43.438: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod pod-configmaps-d5799126-736d-4027-9c71-bd6814b08ed6 container configmap-volume-test: <nil>
STEP: delete the pod
Oct 25 16:24:43.965: INFO: Waiting for pod pod-configmaps-d5799126-736d-4027-9c71-bd6814b08ed6 to disappear
Oct 25 16:24:44.106: INFO: Pod pod-configmaps-d5799126-736d-4027-9c71-bd6814b08ed6 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:24:44.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6391" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":306,"completed":65,"skipped":1054,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 19 lines ...
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:25:01.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2331" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":306,"completed":66,"skipped":1061,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Oct 25 16:25:02.075: INFO: >>> kubeConfig: /workspace/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:25:03.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3413" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":306,"completed":67,"skipped":1073,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Variable Expansion
... skipping 2 lines ...
Oct 25 16:25:03.710: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test substitution in container's args
Oct 25 16:25:05.787: INFO: Waiting up to 5m0s for pod "var-expansion-53396fd9-28e9-4946-a186-002b29cfa5f0" in namespace "var-expansion-8948" to be "Succeeded or Failed"
Oct 25 16:25:05.968: INFO: Pod "var-expansion-53396fd9-28e9-4946-a186-002b29cfa5f0": Phase="Pending", Reason="", readiness=false. Elapsed: 180.558833ms
Oct 25 16:25:08.005: INFO: Pod "var-expansion-53396fd9-28e9-4946-a186-002b29cfa5f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217329784s
Oct 25 16:25:10.044: INFO: Pod "var-expansion-53396fd9-28e9-4946-a186-002b29cfa5f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.256449666s
STEP: Saw pod success
Oct 25 16:25:10.044: INFO: Pod "var-expansion-53396fd9-28e9-4946-a186-002b29cfa5f0" satisfied condition "Succeeded or Failed"
Oct 25 16:25:10.081: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod var-expansion-53396fd9-28e9-4946-a186-002b29cfa5f0 container dapi-container: <nil>
STEP: delete the pod
Oct 25 16:25:10.182: INFO: Waiting for pod var-expansion-53396fd9-28e9-4946-a186-002b29cfa5f0 to disappear
Oct 25 16:25:10.219: INFO: Pod var-expansion-53396fd9-28e9-4946-a186-002b29cfa5f0 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:25:10.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-8948" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":306,"completed":68,"skipped":1085,"failed":0}
SSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 8 lines ...
Oct 25 16:25:10.586: INFO: >>> kubeConfig: /workspace/.kube/config
Oct 25 16:25:17.198: INFO: >>> kubeConfig: /workspace/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:25:34.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4175" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":306,"completed":69,"skipped":1088,"failed":0}
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Oct 25 16:25:34.819: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0666 on node default medium
Oct 25 16:25:35.043: INFO: Waiting up to 5m0s for pod "pod-5f920e37-d2b0-40f0-b6e7-93af5f95adb6" in namespace "emptydir-1430" to be "Succeeded or Failed"
Oct 25 16:25:35.083: INFO: Pod "pod-5f920e37-d2b0-40f0-b6e7-93af5f95adb6": Phase="Pending", Reason="", readiness=false. Elapsed: 39.440787ms
Oct 25 16:25:37.281: INFO: Pod "pod-5f920e37-d2b0-40f0-b6e7-93af5f95adb6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.237148622s
STEP: Saw pod success
Oct 25 16:25:37.281: INFO: Pod "pod-5f920e37-d2b0-40f0-b6e7-93af5f95adb6" satisfied condition "Succeeded or Failed"
Oct 25 16:25:37.317: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod pod-5f920e37-d2b0-40f0-b6e7-93af5f95adb6 container test-container: <nil>
STEP: delete the pod
Oct 25 16:25:37.402: INFO: Waiting for pod pod-5f920e37-d2b0-40f0-b6e7-93af5f95adb6 to disappear
Oct 25 16:25:37.438: INFO: Pod pod-5f920e37-d2b0-40f0-b6e7-93af5f95adb6 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:25:37.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1430" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":70,"skipped":1091,"failed":0}
SS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected secret
... skipping 12 lines ...
STEP: Creating secret with name s-test-opt-create-ae7c1d00-7e05-496d-87f1-94ed2404a8bc
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:27:01.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6207" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":306,"completed":71,"skipped":1093,"failed":0}
SSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 34 lines ...
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:27:06.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6845" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":306,"completed":72,"skipped":1097,"failed":0}
SSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Networking
... skipping 36 lines ...
Oct 25 16:27:30.287: INFO: >>> kubeConfig: /workspace/.kube/config
Oct 25 16:27:30.566: INFO: Found all 1 expected endpoints: [netserver-2]
[AfterEach] [sig-network] Networking
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:27:30.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5999" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":73,"skipped":1102,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Probing container
... skipping 13 lines ...
Oct 25 16:27:55.712: INFO: Restart count of pod container-probe-2601/liveness-7569366d-680a-43c3-8bd9-921a53e4f770 is now 1 (22.572928129s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:27:55.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2601" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":306,"completed":74,"skipped":1110,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Oct 25 16:27:56.216: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f76324b3-9cd7-418a-bf9f-59d52c2dabf4" in namespace "downward-api-4723" to be "Succeeded or Failed"
Oct 25 16:27:56.273: INFO: Pod "downwardapi-volume-f76324b3-9cd7-418a-bf9f-59d52c2dabf4": Phase="Pending", Reason="", readiness=false. Elapsed: 56.312781ms
Oct 25 16:27:58.314: INFO: Pod "downwardapi-volume-f76324b3-9cd7-418a-bf9f-59d52c2dabf4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.097891025s
STEP: Saw pod success
Oct 25 16:27:58.314: INFO: Pod "downwardapi-volume-f76324b3-9cd7-418a-bf9f-59d52c2dabf4" satisfied condition "Succeeded or Failed"
Oct 25 16:27:58.357: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod downwardapi-volume-f76324b3-9cd7-418a-bf9f-59d52c2dabf4 container client-container: <nil>
STEP: delete the pod
Oct 25 16:27:58.469: INFO: Waiting for pod downwardapi-volume-f76324b3-9cd7-418a-bf9f-59d52c2dabf4 to disappear
Oct 25 16:27:58.514: INFO: Pod downwardapi-volume-f76324b3-9cd7-418a-bf9f-59d52c2dabf4 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:27:58.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4723" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":306,"completed":75,"skipped":1167,"failed":0}
SSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should test the lifecycle of a ReplicationController [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] ReplicationController
... skipping 26 lines ...
STEP: deleting ReplicationControllers by collection
STEP: waiting for ReplicationController to have a DELETED watchEvent
[AfterEach] [sig-apps] ReplicationController
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:28:14.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2450" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":306,"completed":76,"skipped":1174,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Events 
  should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Events
... skipping 11 lines ...
STEP: deleting the test event
STEP: listing all events in all namespaces
[AfterEach] [sig-api-machinery] Events
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:28:14.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-7542" for this suite.
•{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":306,"completed":77,"skipped":1192,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 25 lines ...
Oct 25 16:28:31.642: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Oct 25 16:28:31.679: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:28:31.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7827" for this suite.
•{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":306,"completed":78,"skipped":1210,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 24 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:28:38.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4574" for this suite.
STEP: Destroying namespace "webhook-4574-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":306,"completed":79,"skipped":1221,"failed":0}

------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Oct 25 16:28:39.476: INFO: >>> kubeConfig: /workspace/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:28:40.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9338" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":306,"completed":80,"skipped":1221,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 21 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:28:49.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5685" for this suite.
STEP: Destroying namespace "webhook-5685-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":306,"completed":81,"skipped":1231,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-cli] Kubectl client
... skipping 26 lines ...
Oct 25 16:28:53.518: INFO: Selector matched 1 pods for map[app:agnhost]
Oct 25 16:28:53.518: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:28:53.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9534" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":306,"completed":82,"skipped":1251,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Oct 25 16:28:54.064: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9834dd19-a6ec-4cff-b208-85c0dee149a1" in namespace "downward-api-7342" to be "Succeeded or Failed"
Oct 25 16:28:54.162: INFO: Pod "downwardapi-volume-9834dd19-a6ec-4cff-b208-85c0dee149a1": Phase="Pending", Reason="", readiness=false. Elapsed: 97.771344ms
Oct 25 16:28:56.199: INFO: Pod "downwardapi-volume-9834dd19-a6ec-4cff-b208-85c0dee149a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.134366491s
STEP: Saw pod success
Oct 25 16:28:56.199: INFO: Pod "downwardapi-volume-9834dd19-a6ec-4cff-b208-85c0dee149a1" satisfied condition "Succeeded or Failed"
Oct 25 16:28:56.236: INFO: Trying to get logs from node bootstrap-e2e-minion-group-jzdr pod downwardapi-volume-9834dd19-a6ec-4cff-b208-85c0dee149a1 container client-container: <nil>
STEP: delete the pod
Oct 25 16:28:56.350: INFO: Waiting for pod downwardapi-volume-9834dd19-a6ec-4cff-b208-85c0dee149a1 to disappear
Oct 25 16:28:56.387: INFO: Pod downwardapi-volume-9834dd19-a6ec-4cff-b208-85c0dee149a1 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:28:56.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7342" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":306,"completed":83,"skipped":1259,"failed":0}
SSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-cli] Kubectl client
... skipping 60 lines ...
Oct 25 16:29:04.211: INFO: stderr: ""
Oct 25 16:29:04.211: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:29:04.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3391" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":306,"completed":84,"skipped":1265,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Oct 25 16:29:04.520: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-c63dc1cc-9e20-4238-9369-26fd10b56112" in namespace "security-context-test-6752" to be "Succeeded or Failed"
Oct 25 16:29:04.557: INFO: Pod "busybox-privileged-false-c63dc1cc-9e20-4238-9369-26fd10b56112": Phase="Pending", Reason="", readiness=false. Elapsed: 36.987866ms
Oct 25 16:29:06.597: INFO: Pod "busybox-privileged-false-c63dc1cc-9e20-4238-9369-26fd10b56112": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.076390359s
Oct 25 16:29:06.597: INFO: Pod "busybox-privileged-false-c63dc1cc-9e20-4238-9369-26fd10b56112" satisfied condition "Succeeded or Failed"
Oct 25 16:29:06.641: INFO: Got logs for pod "busybox-privileged-false-c63dc1cc-9e20-4238-9369-26fd10b56112": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:29:06.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6752" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":85,"skipped":1281,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 163 lines ...
Oct 25 16:29:52.858: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"7450"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:29:53.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2101" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":306,"completed":86,"skipped":1305,"failed":0}
SSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-cli] Kubectl client
... skipping 20 lines ...
Oct 25 16:29:55.398: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Oct 25 16:29:55.398: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.44.183 --kubeconfig=/workspace/.kube/config --namespace=kubectl-9556 describe pod agnhost-primary-mx8kj'
Oct 25 16:29:55.680: INFO: stderr: ""
Oct 25 16:29:55.680: INFO: stdout: "Name:         agnhost-primary-mx8kj\nNamespace:    kubectl-9556\nPriority:     0\nNode:         bootstrap-e2e-minion-group-05w9/10.138.0.5\nStart Time:   Sun, 25 Oct 2020 16:29:53 +0000\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nStatus:       Running\nIP:           10.64.2.85\nIPs:\n  IP:           10.64.2.85\nControlled By:  ReplicationController/agnhost-primary\nContainers:\n  agnhost-primary:\n    Container ID:   containerd://994b047635f2395d99cdb147e3a8752eb9bf60998078319d2b3e79c1dd834b51\n    Image:          k8s.gcr.io/e2e-test-images/agnhost:2.21\n    Image ID:       k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Sun, 25 Oct 2020 16:29:54 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-dnrh4 (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-dnrh4:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-dnrh4\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n  Type    Reason     Age   From               Message\n  ----    ------     ----  ----               -------\n  Normal  Scheduled  2s    default-scheduler  Successfully assigned kubectl-9556/agnhost-primary-mx8kj to bootstrap-e2e-minion-group-05w9\n  Normal  Pulled     1s    kubelet            Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.21\" already present on machine\n  Normal  Created    1s    kubelet            Created container agnhost-primary\n  Normal  Started    1s    kubelet            Started container agnhost-primary\n"
Oct 25 16:29:55.680: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.44.183 --kubeconfig=/workspace/.kube/config --namespace=kubectl-9556 describe rc agnhost-primary'
Oct 25 16:29:56.021: INFO: stderr: ""
Oct 25 16:29:56.021: INFO: stdout: "Name:         agnhost-primary\nNamespace:    kubectl-9556\nSelector:     app=agnhost,role=primary\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=primary\n  Containers:\n   agnhost-primary:\n    Image:        k8s.gcr.io/e2e-test-images/agnhost:2.21\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  3s    replication-controller  Created pod: agnhost-primary-mx8kj\n"
Oct 25 16:29:56.021: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.44.183 --kubeconfig=/workspace/.kube/config --namespace=kubectl-9556 describe service agnhost-primary'
Oct 25 16:29:56.339: INFO: stderr: ""
Oct 25 16:29:56.339: INFO: stdout: "Name:              agnhost-primary\nNamespace:         kubectl-9556\nLabels:            app=agnhost\n                   role=primary\nAnnotations:       <none>\nSelector:          app=agnhost,role=primary\nType:              ClusterIP\nIP:                10.0.15.209\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.64.2.85:6379\nSession Affinity:  None\nEvents:            <none>\n"
Oct 25 16:29:56.377: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.44.183 --kubeconfig=/workspace/.kube/config --namespace=kubectl-9556 describe node bootstrap-e2e-master'
Oct 25 16:29:56.798: INFO: stderr: ""
Oct 25 16:29:56.798: INFO: stdout: "Name:               bootstrap-e2e-master\nRoles:              <none>\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/instance-type=n1-standard-1\n                    beta.kubernetes.io/os=linux\n                    cloud.google.com/metadata-proxy-ready=true\n                    failure-domain.beta.kubernetes.io/region=us-west1\n                    failure-domain.beta.kubernetes.io/zone=us-west1-b\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=bootstrap-e2e-master\n                    kubernetes.io/os=linux\n                    node.kubernetes.io/instance-type=n1-standard-1\n                    topology.kubernetes.io/region=us-west1\n                    topology.kubernetes.io/zone=us-west1-b\nAnnotations:        node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sun, 25 Oct 2020 16:09:08 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\n                    node.kubernetes.io/unschedulable:NoSchedule\nUnschedulable:      true\nLease:\n  HolderIdentity:  bootstrap-e2e-master\n  AcquireTime:     <unset>\n  RenewTime:       Sun, 25 Oct 2020 16:29:52 +0000\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sun, 25 Oct 2020 16:09:26 +0000   Sun, 25 Oct 2020 16:09:26 +0000   RouteCreated                 RouteController created a route\n  MemoryPressure       False   Sun, 25 Oct 2020 16:29:41 +0000   Sun, 25 Oct 2020 16:09:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Sun, 25 Oct 2020 16:29:41 +0000   Sun, 25 Oct 2020 16:09:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Sun, 25 Oct 2020 16:29:41 +0000   Sun, 25 Oct 2020 16:09:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Sun, 25 Oct 2020 16:29:41 +0000   Sun, 25 Oct 2020 16:09:18 +0000   KubeletReady                 kubelet is posting ready status. 
AppArmor enabled\nAddresses:\n  InternalIP:   10.138.0.2\n  ExternalIP:   35.247.44.183\n  InternalDNS:  bootstrap-e2e-master.c.k8s-gci-gce-ingress1-5.internal\n  Hostname:     bootstrap-e2e-master.c.k8s-gci-gce-ingress1-5.internal\nCapacity:\n  attachable-volumes-gce-pd:  127\n  cpu:                        1\n  ephemeral-storage:          16293736Ki\n  hugepages-2Mi:              0\n  memory:                     3776180Ki\n  pods:                       110\nAllocatable:\n  attachable-volumes-gce-pd:  127\n  cpu:                        1\n  ephemeral-storage:          15016307073\n  hugepages-2Mi:              0\n  memory:                     3520180Ki\n  pods:                       110\nSystem Info:\n  Machine ID:                 82f263e47bf24642066ee88c08de0b99\n  System UUID:                82f263e4-7bf2-4642-066e-e88c08de0b99\n  Boot ID:                    f51f2a5e-640e-4df6-8861-2cdfe4e7d897\n  Kernel Version:             5.4.49+\n  OS Image:                   Container-Optimized OS from Google\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.4.1\n  Kubelet Version:            v1.20.0-alpha.3.114+5935fcd704fe89\n  Kube-Proxy Version:         v1.20.0-alpha.3.114+5935fcd704fe89\nPodCIDR:                      10.64.0.0/24\nPodCIDRs:                     10.64.0.0/24\nProviderID:                   gce://k8s-gci-gce-ingress1-5/us-west1-b/bootstrap-e2e-master\nNon-terminated Pods:          (8 in total)\n  Namespace                   Name                                            CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                                            ------------  ----------  ---------------  -------------  ---\n  kube-system                 etcd-server-bootstrap-e2e-master                200m (20%)    0 (0%)      0 (0%)           0 (0%)         20m\n  kube-system                 etcd-server-events-bootstrap-e2e-master         100m (10%)    0 (0%)      0 (0%)           0 (0%)         20m\n  kube-system                 kube-addon-manager-bootstrap-e2e-master         5m (0%)       0 (0%)      50Mi (1%)        0 (0%)         19m\n  kube-system                 kube-apiserver-bootstrap-e2e-master             250m (25%)    0 (0%)      0 (0%)           0 (0%)         20m\n  kube-system                 kube-controller-manager-bootstrap-e2e-master    200m (20%)    0 (0%)      0 (0%)           0 (0%)         19m\n  kube-system                 kube-scheduler-bootstrap-e2e-master             75m (7%)      0 (0%)      0 (0%)           0 (0%)         20m\n  kube-system                 l7-lb-controller-bootstrap-e2e-master           10m (1%)      0 (0%)      50Mi (1%)        0 (0%)         20m\n  kube-system                 metadata-proxy-v0.1-vg2qd                       32m (3%)      32m (3%)    45Mi (1%)        45Mi (1%)      20m\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource                   Requests    Limits\n  --------                   --------    ------\n  cpu                        872m (87%)  32m (3%)\n  memory                     145Mi (4%)  45Mi (1%)\n  ephemeral-storage          0 (0%)      0 (0%)\n  hugepages-2Mi              0 (0%)      0 (0%)\n  attachable-volumes-gce-pd  0           0\nEvents:                      <none>\n"
Oct 25 16:29:56.799: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.44.183 --kubeconfig=/workspace/.kube/config --namespace=kubectl-9556 describe namespace kubectl-9556'
Oct 25 16:29:57.112: INFO: stderr: ""
Oct 25 16:29:57.112: INFO: stdout: "Name:         kubectl-9556\nLabels:       e2e-framework=kubectl\n              e2e-run=4e8f4773-3937-4db4-ae14-b12d14b9afe6\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:29:57.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9556" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":306,"completed":87,"skipped":1313,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Services
... skipping 9 lines ...
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:29:57.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3531" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:781
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":306,"completed":88,"skipped":1372,"failed":0}
S
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] ReplicationController
... skipping 10 lines ...
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:30:03.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4275" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":306,"completed":89,"skipped":1373,"failed":0}
SSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating pod pod-subpath-test-configmap-qtjq
STEP: Creating a pod to test atomic-volume-subpath
Oct 25 16:30:03.753: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-qtjq" in namespace "subpath-3033" to be "Succeeded or Failed"
Oct 25 16:30:03.790: INFO: Pod "pod-subpath-test-configmap-qtjq": Phase="Pending", Reason="", readiness=false. Elapsed: 36.756483ms
Oct 25 16:30:05.830: INFO: Pod "pod-subpath-test-configmap-qtjq": Phase="Running", Reason="", readiness=true. Elapsed: 2.076541142s
Oct 25 16:30:07.920: INFO: Pod "pod-subpath-test-configmap-qtjq": Phase="Running", Reason="", readiness=true. Elapsed: 4.166888759s
Oct 25 16:30:09.962: INFO: Pod "pod-subpath-test-configmap-qtjq": Phase="Running", Reason="", readiness=true. Elapsed: 6.20873047s
Oct 25 16:30:12.041: INFO: Pod "pod-subpath-test-configmap-qtjq": Phase="Running", Reason="", readiness=true. Elapsed: 8.287392079s
Oct 25 16:30:14.105: INFO: Pod "pod-subpath-test-configmap-qtjq": Phase="Running", Reason="", readiness=true. Elapsed: 10.351000993s
Oct 25 16:30:16.142: INFO: Pod "pod-subpath-test-configmap-qtjq": Phase="Running", Reason="", readiness=true. Elapsed: 12.387978208s
Oct 25 16:30:18.180: INFO: Pod "pod-subpath-test-configmap-qtjq": Phase="Running", Reason="", readiness=true. Elapsed: 14.426163448s
Oct 25 16:30:20.251: INFO: Pod "pod-subpath-test-configmap-qtjq": Phase="Running", Reason="", readiness=true. Elapsed: 16.497058829s
Oct 25 16:30:22.289: INFO: Pod "pod-subpath-test-configmap-qtjq": Phase="Running", Reason="", readiness=true. Elapsed: 18.53563768s
Oct 25 16:30:24.327: INFO: Pod "pod-subpath-test-configmap-qtjq": Phase="Running", Reason="", readiness=true. Elapsed: 20.573572442s
Oct 25 16:30:26.368: INFO: Pod "pod-subpath-test-configmap-qtjq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.614165352s
STEP: Saw pod success
Oct 25 16:30:26.368: INFO: Pod "pod-subpath-test-configmap-qtjq" satisfied condition "Succeeded or Failed"
Oct 25 16:30:26.407: INFO: Trying to get logs from node bootstrap-e2e-minion-group-jzdr pod pod-subpath-test-configmap-qtjq container test-container-subpath-configmap-qtjq: <nil>
STEP: delete the pod
Oct 25 16:30:26.526: INFO: Waiting for pod pod-subpath-test-configmap-qtjq to disappear
Oct 25 16:30:26.563: INFO: Pod pod-subpath-test-configmap-qtjq no longer exists
STEP: Deleting pod pod-subpath-test-configmap-qtjq
Oct 25 16:30:26.563: INFO: Deleting pod "pod-subpath-test-configmap-qtjq" in namespace "subpath-3033"
[AfterEach] [sig-storage] Subpath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:30:26.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3033" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":306,"completed":90,"skipped":1376,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Pods
... skipping 10 lines ...
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:30:29.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5348" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":306,"completed":91,"skipped":1389,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Oct 25 16:30:29.993: INFO: Waiting up to 5m0s for pod "downwardapi-volume-be246f06-81e8-461c-b484-965673eafd63" in namespace "projected-9315" to be "Succeeded or Failed"
Oct 25 16:30:30.030: INFO: Pod "downwardapi-volume-be246f06-81e8-461c-b484-965673eafd63": Phase="Pending", Reason="", readiness=false. Elapsed: 37.081745ms
Oct 25 16:30:32.075: INFO: Pod "downwardapi-volume-be246f06-81e8-461c-b484-965673eafd63": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.081546619s
STEP: Saw pod success
Oct 25 16:30:32.075: INFO: Pod "downwardapi-volume-be246f06-81e8-461c-b484-965673eafd63" satisfied condition "Succeeded or Failed"
Oct 25 16:30:32.120: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod downwardapi-volume-be246f06-81e8-461c-b484-965673eafd63 container client-container: <nil>
STEP: delete the pod
Oct 25 16:30:32.263: INFO: Waiting for pod downwardapi-volume-be246f06-81e8-461c-b484-965673eafd63 to disappear
Oct 25 16:30:32.299: INFO: Pod downwardapi-volume-be246f06-81e8-461c-b484-965673eafd63 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:30:32.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9315" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":306,"completed":92,"skipped":1408,"failed":0}
SSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating pod pod-subpath-test-projected-mxkf
STEP: Creating a pod to test atomic-volume-subpath
Oct 25 16:30:32.701: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-mxkf" in namespace "subpath-8763" to be "Succeeded or Failed"
Oct 25 16:30:32.758: INFO: Pod "pod-subpath-test-projected-mxkf": Phase="Pending", Reason="", readiness=false. Elapsed: 57.388157ms
Oct 25 16:30:34.813: INFO: Pod "pod-subpath-test-projected-mxkf": Phase="Running", Reason="", readiness=true. Elapsed: 2.111894573s
Oct 25 16:30:36.852: INFO: Pod "pod-subpath-test-projected-mxkf": Phase="Running", Reason="", readiness=true. Elapsed: 4.151359295s
Oct 25 16:30:38.891: INFO: Pod "pod-subpath-test-projected-mxkf": Phase="Running", Reason="", readiness=true. Elapsed: 6.189549341s
Oct 25 16:30:40.929: INFO: Pod "pod-subpath-test-projected-mxkf": Phase="Running", Reason="", readiness=true. Elapsed: 8.227424615s
Oct 25 16:30:43.015: INFO: Pod "pod-subpath-test-projected-mxkf": Phase="Running", Reason="", readiness=true. Elapsed: 10.314151604s
Oct 25 16:30:45.052: INFO: Pod "pod-subpath-test-projected-mxkf": Phase="Running", Reason="", readiness=true. Elapsed: 12.350863565s
Oct 25 16:30:47.089: INFO: Pod "pod-subpath-test-projected-mxkf": Phase="Running", Reason="", readiness=true. Elapsed: 14.388279409s
Oct 25 16:30:49.151: INFO: Pod "pod-subpath-test-projected-mxkf": Phase="Running", Reason="", readiness=true. Elapsed: 16.449646098s
Oct 25 16:30:51.196: INFO: Pod "pod-subpath-test-projected-mxkf": Phase="Running", Reason="", readiness=true. Elapsed: 18.495010662s
Oct 25 16:30:53.233: INFO: Pod "pod-subpath-test-projected-mxkf": Phase="Running", Reason="", readiness=true. Elapsed: 20.532248381s
Oct 25 16:30:55.271: INFO: Pod "pod-subpath-test-projected-mxkf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.569604075s
STEP: Saw pod success
Oct 25 16:30:55.271: INFO: Pod "pod-subpath-test-projected-mxkf" satisfied condition "Succeeded or Failed"
Oct 25 16:30:55.308: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod pod-subpath-test-projected-mxkf container test-container-subpath-projected-mxkf: <nil>
STEP: delete the pod
Oct 25 16:30:55.397: INFO: Waiting for pod pod-subpath-test-projected-mxkf to disappear
Oct 25 16:30:55.434: INFO: Pod pod-subpath-test-projected-mxkf no longer exists
STEP: Deleting pod pod-subpath-test-projected-mxkf
Oct 25 16:30:55.434: INFO: Deleting pod "pod-subpath-test-projected-mxkf" in namespace "subpath-8763"
[AfterEach] [sig-storage] Subpath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:30:55.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8763" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":306,"completed":93,"skipped":1413,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 25 16:30:55.552: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating the pod
Oct 25 16:30:55.741: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:31:02.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1859" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":306,"completed":94,"skipped":1479,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Oct 25 16:31:02.462: INFO: Waiting up to 5m0s for pod "downwardapi-volume-69f0ea33-c675-4968-a33b-8a566282a3af" in namespace "downward-api-5346" to be "Succeeded or Failed"
Oct 25 16:31:02.501: INFO: Pod "downwardapi-volume-69f0ea33-c675-4968-a33b-8a566282a3af": Phase="Pending", Reason="", readiness=false. Elapsed: 38.741593ms
Oct 25 16:31:04.580: INFO: Pod "downwardapi-volume-69f0ea33-c675-4968-a33b-8a566282a3af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.117823187s
STEP: Saw pod success
Oct 25 16:31:04.580: INFO: Pod "downwardapi-volume-69f0ea33-c675-4968-a33b-8a566282a3af" satisfied condition "Succeeded or Failed"
Oct 25 16:31:04.658: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod downwardapi-volume-69f0ea33-c675-4968-a33b-8a566282a3af container client-container: <nil>
STEP: delete the pod
Oct 25 16:31:04.866: INFO: Waiting for pod downwardapi-volume-69f0ea33-c675-4968-a33b-8a566282a3af to disappear
Oct 25 16:31:04.916: INFO: Pod downwardapi-volume-69f0ea33-c675-4968-a33b-8a566282a3af no longer exists
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:31:04.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5346" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":306,"completed":95,"skipped":1490,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Oct 25 16:31:05.346: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2f1c59b1-baaa-4ff3-a293-4ee944e656f2" in namespace "downward-api-4882" to be "Succeeded or Failed"
Oct 25 16:31:05.530: INFO: Pod "downwardapi-volume-2f1c59b1-baaa-4ff3-a293-4ee944e656f2": Phase="Pending", Reason="", readiness=false. Elapsed: 183.952668ms
Oct 25 16:31:07.567: INFO: Pod "downwardapi-volume-2f1c59b1-baaa-4ff3-a293-4ee944e656f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.220677813s
STEP: Saw pod success
Oct 25 16:31:07.567: INFO: Pod "downwardapi-volume-2f1c59b1-baaa-4ff3-a293-4ee944e656f2" satisfied condition "Succeeded or Failed"
Oct 25 16:31:07.603: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod downwardapi-volume-2f1c59b1-baaa-4ff3-a293-4ee944e656f2 container client-container: <nil>
STEP: delete the pod
Oct 25 16:31:07.692: INFO: Waiting for pod downwardapi-volume-2f1c59b1-baaa-4ff3-a293-4ee944e656f2 to disappear
Oct 25 16:31:07.729: INFO: Pod downwardapi-volume-2f1c59b1-baaa-4ff3-a293-4ee944e656f2 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:31:07.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4882" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":306,"completed":96,"skipped":1500,"failed":0}
SSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-node] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap configmap-4134/configmap-test-970902af-ee9d-4db5-af5a-6601d7c6794a
STEP: Creating a pod to test consume configMaps
Oct 25 16:31:08.075: INFO: Waiting up to 5m0s for pod "pod-configmaps-6fe8bbd0-4041-449f-9161-e4b633e93744" in namespace "configmap-4134" to be "Succeeded or Failed"
Oct 25 16:31:08.112: INFO: Pod "pod-configmaps-6fe8bbd0-4041-449f-9161-e4b633e93744": Phase="Pending", Reason="", readiness=false. Elapsed: 36.187887ms
Oct 25 16:31:10.150: INFO: Pod "pod-configmaps-6fe8bbd0-4041-449f-9161-e4b633e93744": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.074269452s
STEP: Saw pod success
Oct 25 16:31:10.150: INFO: Pod "pod-configmaps-6fe8bbd0-4041-449f-9161-e4b633e93744" satisfied condition "Succeeded or Failed"
Oct 25 16:31:10.189: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod pod-configmaps-6fe8bbd0-4041-449f-9161-e4b633e93744 container env-test: <nil>
STEP: delete the pod
Oct 25 16:31:10.534: INFO: Waiting for pod pod-configmaps-6fe8bbd0-4041-449f-9161-e4b633e93744 to disappear
Oct 25 16:31:10.594: INFO: Pod pod-configmaps-6fe8bbd0-4041-449f-9161-e4b633e93744 no longer exists
[AfterEach] [sig-node] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:31:10.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4134" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":306,"completed":97,"skipped":1505,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Services
... skipping 11 lines ...
STEP: creating replication controller affinity-nodeport-transition in namespace services-5725
I1025 16:31:11.267040  144261 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-5725, replica count: 3
I1025 16:31:14.317851  144261 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Oct 25 16:31:14.432: INFO: Creating new exec pod
Oct 25 16:31:17.831: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.44.183 --kubeconfig=/workspace/.kube/config --namespace=services-5725 exec execpod-affinity7s8kz -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80'
Oct 25 16:31:19.508: INFO: rc: 1
Oct 25 16:31:19.508: INFO: Service reachability failing with error: error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.44.183 --kubeconfig=/workspace/.kube/config --namespace=services-5725 exec execpod-affinity7s8kz -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-nodeport-transition 80
nc: connect to affinity-nodeport-transition port 80 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 25 16:31:20.509: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.44.183 --kubeconfig=/workspace/.kube/config --namespace=services-5725 exec execpod-affinity7s8kz -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80'
Oct 25 16:31:22.068: INFO: stderr: "+ nc -zv -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n"
Oct 25 16:31:22.068: INFO: stdout: ""
Oct 25 16:31:22.069: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.44.183 --kubeconfig=/workspace/.kube/config --namespace=services-5725 exec execpod-affinity7s8kz -- /bin/sh -x -c nc -zv -t -w 2 10.0.135.248 80'
... skipping 75 lines ...
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:32:13.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5725" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:781
•{"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":306,"completed":98,"skipped":1522,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 25 16:32:13.246: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating projection with secret that has name secret-emptykey-test-b451a4db-a068-4343-a5d0-c20e33f33657
[AfterEach] [sig-api-machinery] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:32:13.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1581" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":306,"completed":99,"skipped":1535,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Docker Containers
... skipping 2 lines ...
Oct 25 16:32:13.751: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test override command
Oct 25 16:32:13.979: INFO: Waiting up to 5m0s for pod "client-containers-2581231e-3e9d-484c-8b0b-dc6d1654d077" in namespace "containers-9648" to be "Succeeded or Failed"
Oct 25 16:32:14.016: INFO: Pod "client-containers-2581231e-3e9d-484c-8b0b-dc6d1654d077": Phase="Pending", Reason="", readiness=false. Elapsed: 37.156399ms
Oct 25 16:32:16.053: INFO: Pod "client-containers-2581231e-3e9d-484c-8b0b-dc6d1654d077": Phase="Running", Reason="", readiness=true. Elapsed: 2.074092427s
Oct 25 16:32:18.091: INFO: Pod "client-containers-2581231e-3e9d-484c-8b0b-dc6d1654d077": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.112568346s
STEP: Saw pod success
Oct 25 16:32:18.091: INFO: Pod "client-containers-2581231e-3e9d-484c-8b0b-dc6d1654d077" satisfied condition "Succeeded or Failed"
Oct 25 16:32:18.128: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod client-containers-2581231e-3e9d-484c-8b0b-dc6d1654d077 container agnhost-container: <nil>
STEP: delete the pod
Oct 25 16:32:18.325: INFO: Waiting for pod client-containers-2581231e-3e9d-484c-8b0b-dc6d1654d077 to disappear
Oct 25 16:32:18.389: INFO: Pod client-containers-2581231e-3e9d-484c-8b0b-dc6d1654d077 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:32:18.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9648" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":306,"completed":100,"skipped":1590,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 33 lines ...
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:32:31.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1452" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":306,"completed":101,"skipped":1597,"failed":0}
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating projection with secret that has name projected-secret-test-map-618c5ada-a6bd-40c8-9dbe-ad2b9de1adb6
STEP: Creating a pod to test consume secrets
Oct 25 16:32:32.069: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b7c808b7-fb34-4c40-b682-7cda8dd3694d" in namespace "projected-5336" to be "Succeeded or Failed"
Oct 25 16:32:32.121: INFO: Pod "pod-projected-secrets-b7c808b7-fb34-4c40-b682-7cda8dd3694d": Phase="Pending", Reason="", readiness=false. Elapsed: 51.524136ms
Oct 25 16:32:34.158: INFO: Pod "pod-projected-secrets-b7c808b7-fb34-4c40-b682-7cda8dd3694d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.088732409s
STEP: Saw pod success
Oct 25 16:32:34.158: INFO: Pod "pod-projected-secrets-b7c808b7-fb34-4c40-b682-7cda8dd3694d" satisfied condition "Succeeded or Failed"
Oct 25 16:32:34.195: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod pod-projected-secrets-b7c808b7-fb34-4c40-b682-7cda8dd3694d container projected-secret-volume-test: <nil>
STEP: delete the pod
Oct 25 16:32:34.301: INFO: Waiting for pod pod-projected-secrets-b7c808b7-fb34-4c40-b682-7cda8dd3694d to disappear
Oct 25 16:32:34.337: INFO: Pod pod-projected-secrets-b7c808b7-fb34-4c40-b682-7cda8dd3694d no longer exists
[AfterEach] [sig-storage] Projected secret
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:32:34.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5336" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":102,"skipped":1598,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] Events 
  should delete a collection of events [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Events
... skipping 14 lines ...
STEP: check that the list of events matches the requested quantity
Oct 25 16:32:34.830: INFO: requesting list of events to confirm quantity
[AfterEach] [sig-api-machinery] Events
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:32:34.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-6081" for this suite.
•{"msg":"PASSED [sig-api-machinery] Events should delete a collection of events [Conformance]","total":306,"completed":103,"skipped":1604,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] Deployment
... skipping 44 lines ...
Oct 25 16:32:57.475: INFO: Pod "test-rollover-deployment-668db69979-79jbf" is available:
&Pod{ObjectMeta:{test-rollover-deployment-668db69979-79jbf test-rollover-deployment-668db69979- deployment-2131  b729c896-568b-4a08-9b82-860c88f9e130 8204 0 2020-10-25 16:32:45 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:668db69979] map[] [{apps/v1 ReplicaSet test-rollover-deployment-668db69979 a4d338a5-21cf-4e91-978e-23dfeca779e8 0xc0024cb6f7 0xc0024cb6f8}] []  [{kube-controller-manager Update v1 2020-10-25 16:32:45 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a4d338a5-21cf-4e91-978e-23dfeca779e8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-25 16:32:46 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.64.2.101\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qxjch,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qxjch,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qxjch,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-05w9,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:32:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:32:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:32:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:32:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.5,PodIP:10.64.2.101,StartTime:2020-10-25 16:32:45 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-25 16:32:46 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a,ContainerID:containerd://1b049e2cd64deb1c1879e3a741f5c99f5362f76bc677093f008d833b414b9c68,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.64.2.101,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:32:57.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2131" for this suite.
•{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":306,"completed":104,"skipped":1649,"failed":0}
S
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Docker Containers
... skipping 2 lines ...
Oct 25 16:32:57.554: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test override all
Oct 25 16:32:57.818: INFO: Waiting up to 5m0s for pod "client-containers-a2338ea6-2b41-4a75-b5a0-ca62120e9ab8" in namespace "containers-8886" to be "Succeeded or Failed"
Oct 25 16:32:57.854: INFO: Pod "client-containers-a2338ea6-2b41-4a75-b5a0-ca62120e9ab8": Phase="Pending", Reason="", readiness=false. Elapsed: 36.133351ms
Oct 25 16:33:00.022: INFO: Pod "client-containers-a2338ea6-2b41-4a75-b5a0-ca62120e9ab8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.203511373s
STEP: Saw pod success
Oct 25 16:33:00.022: INFO: Pod "client-containers-a2338ea6-2b41-4a75-b5a0-ca62120e9ab8" satisfied condition "Succeeded or Failed"
Oct 25 16:33:00.058: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod client-containers-a2338ea6-2b41-4a75-b5a0-ca62120e9ab8 container agnhost-container: <nil>
STEP: delete the pod
Oct 25 16:33:00.147: INFO: Waiting for pod client-containers-a2338ea6-2b41-4a75-b5a0-ca62120e9ab8 to disappear
Oct 25 16:33:00.183: INFO: Pod client-containers-a2338ea6-2b41-4a75-b5a0-ca62120e9ab8 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:33:00.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8886" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":306,"completed":105,"skipped":1650,"failed":0}
SS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating secret with name secret-test-ae7502ed-3a41-4763-8289-93f4177ba4f2
STEP: Creating a pod to test consume secrets
Oct 25 16:33:00.708: INFO: Waiting up to 5m0s for pod "pod-secrets-0a614bb6-2b5c-4594-9f9b-3145a79c56b2" in namespace "secrets-498" to be "Succeeded or Failed"
Oct 25 16:33:00.757: INFO: Pod "pod-secrets-0a614bb6-2b5c-4594-9f9b-3145a79c56b2": Phase="Pending", Reason="", readiness=false. Elapsed: 48.785584ms
Oct 25 16:33:02.806: INFO: Pod "pod-secrets-0a614bb6-2b5c-4594-9f9b-3145a79c56b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.097607309s
STEP: Saw pod success
Oct 25 16:33:02.806: INFO: Pod "pod-secrets-0a614bb6-2b5c-4594-9f9b-3145a79c56b2" satisfied condition "Succeeded or Failed"
Oct 25 16:33:02.856: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod pod-secrets-0a614bb6-2b5c-4594-9f9b-3145a79c56b2 container secret-volume-test: <nil>
STEP: delete the pod
Oct 25 16:33:03.034: INFO: Waiting for pod pod-secrets-0a614bb6-2b5c-4594-9f9b-3145a79c56b2 to disappear
Oct 25 16:33:03.072: INFO: Pod pod-secrets-0a614bb6-2b5c-4594-9f9b-3145a79c56b2 no longer exists
[AfterEach] [sig-storage] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:33:03.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-498" for this suite.
STEP: Destroying namespace "secret-namespace-26" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":306,"completed":106,"skipped":1652,"failed":0}
SSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Services
... skipping 24 lines ...
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:33:21.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5663" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:781
•{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":306,"completed":107,"skipped":1659,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 24 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:33:26.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1089" for this suite.
STEP: Destroying namespace "webhook-1089-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":306,"completed":108,"skipped":1703,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 10 lines ...
Oct 25 16:33:45.288: INFO: >>> kubeConfig: /workspace/.kube/config
Oct 25 16:33:50.830: INFO: >>> kubeConfig: /workspace/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:34:09.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6140" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":306,"completed":109,"skipped":1710,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 13 lines ...
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:34:26.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7202" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":306,"completed":110,"skipped":1739,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Oct 25 16:34:27.161: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1da82567-125a-4a28-b544-635ee013a3c4" in namespace "projected-9937" to be "Succeeded or Failed"
Oct 25 16:34:27.207: INFO: Pod "downwardapi-volume-1da82567-125a-4a28-b544-635ee013a3c4": Phase="Pending", Reason="", readiness=false. Elapsed: 46.211828ms
Oct 25 16:34:29.247: INFO: Pod "downwardapi-volume-1da82567-125a-4a28-b544-635ee013a3c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.085543686s
STEP: Saw pod success
Oct 25 16:34:29.247: INFO: Pod "downwardapi-volume-1da82567-125a-4a28-b544-635ee013a3c4" satisfied condition "Succeeded or Failed"
Oct 25 16:34:29.287: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod downwardapi-volume-1da82567-125a-4a28-b544-635ee013a3c4 container client-container: <nil>
STEP: delete the pod
Oct 25 16:34:29.379: INFO: Waiting for pod downwardapi-volume-1da82567-125a-4a28-b544-635ee013a3c4 to disappear
Oct 25 16:34:29.417: INFO: Pod downwardapi-volume-1da82567-125a-4a28-b544-635ee013a3c4 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:34:29.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9937" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":306,"completed":111,"skipped":1762,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Watchers
... skipping 17 lines ...
Oct 25 16:34:30.003: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-8700  27907d41-cb54-4b13-b3b5-694bf79a8163 8664 0 2020-10-25 16:34:29 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-10-25 16:34:29 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Oct 25 16:34:30.003: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-8700  27907d41-cb54-4b13-b3b5-694bf79a8163 8665 0 2020-10-25 16:34:29 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-10-25 16:34:29 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:34:30.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8700" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":306,"completed":112,"skipped":1785,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name configmap-test-volume-7a472f7b-ef27-481c-ab5c-0ef4a35b5d21
STEP: Creating a pod to test consume configMaps
Oct 25 16:34:30.402: INFO: Waiting up to 5m0s for pod "pod-configmaps-97128a84-c959-4a75-8457-2fc7ce9b07d7" in namespace "configmap-6744" to be "Succeeded or Failed"
Oct 25 16:34:30.621: INFO: Pod "pod-configmaps-97128a84-c959-4a75-8457-2fc7ce9b07d7": Phase="Pending", Reason="", readiness=false. Elapsed: 219.541493ms
Oct 25 16:34:32.659: INFO: Pod "pod-configmaps-97128a84-c959-4a75-8457-2fc7ce9b07d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.257541431s
STEP: Saw pod success
Oct 25 16:34:32.659: INFO: Pod "pod-configmaps-97128a84-c959-4a75-8457-2fc7ce9b07d7" satisfied condition "Succeeded or Failed"
Oct 25 16:34:32.697: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod pod-configmaps-97128a84-c959-4a75-8457-2fc7ce9b07d7 container configmap-volume-test: <nil>
STEP: delete the pod
Oct 25 16:34:32.797: INFO: Waiting for pod pod-configmaps-97128a84-c959-4a75-8457-2fc7ce9b07d7 to disappear
Oct 25 16:34:32.833: INFO: Pod pod-configmaps-97128a84-c959-4a75-8457-2fc7ce9b07d7 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:34:32.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6744" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":306,"completed":113,"skipped":1794,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating secret secrets-3199/secret-test-c09fb258-b263-4ea7-b1e6-1dbf1a0e7e22
STEP: Creating a pod to test consume secrets
Oct 25 16:34:33.190: INFO: Waiting up to 5m0s for pod "pod-configmaps-622ded25-0539-4c80-97e4-b097f6dbe041" in namespace "secrets-3199" to be "Succeeded or Failed"
Oct 25 16:34:33.230: INFO: Pod "pod-configmaps-622ded25-0539-4c80-97e4-b097f6dbe041": Phase="Pending", Reason="", readiness=false. Elapsed: 40.241654ms
Oct 25 16:34:35.268: INFO: Pod "pod-configmaps-622ded25-0539-4c80-97e4-b097f6dbe041": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.077867968s
STEP: Saw pod success
Oct 25 16:34:35.268: INFO: Pod "pod-configmaps-622ded25-0539-4c80-97e4-b097f6dbe041" satisfied condition "Succeeded or Failed"
Oct 25 16:34:35.313: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod pod-configmaps-622ded25-0539-4c80-97e4-b097f6dbe041 container env-test: <nil>
STEP: delete the pod
Oct 25 16:34:35.440: INFO: Waiting for pod pod-configmaps-622ded25-0539-4c80-97e4-b097f6dbe041 to disappear
Oct 25 16:34:35.482: INFO: Pod pod-configmaps-622ded25-0539-4c80-97e4-b097f6dbe041 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:34:35.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3199" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":306,"completed":114,"skipped":1804,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 220 lines ...
Oct 25 16:36:32.762: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"9015"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:36:32.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3949" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":306,"completed":115,"skipped":1817,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
... skipping 42 lines ...
Oct 25 16:36:42.604: INFO: >>> kubeConfig: /workspace/.kube/config
Oct 25 16:36:42.892: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:36:42.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-7339" for this suite.
•{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":116,"skipped":1836,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Watchers
... skipping 7 lines ...
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:36:50.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-651" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":306,"completed":117,"skipped":1901,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Services
... skipping 11 lines ...
STEP: creating replication controller externalname-service in namespace services-7985
I1025 16:36:50.590748  144261 runners.go:190] Created replication controller with name: externalname-service, namespace: services-7985, replica count: 2
Oct 25 16:36:53.641: INFO: Creating new exec pod
I1025 16:36:53.641211  144261 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Oct 25 16:36:56.847: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.44.183 --kubeconfig=/workspace/.kube/config --namespace=services-7985 exec execpoddv2lb -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Oct 25 16:36:58.501: INFO: rc: 1
Oct 25 16:36:58.501: INFO: Service reachability failing with error: error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.44.183 --kubeconfig=/workspace/.kube/config --namespace=services-7985 exec execpoddv2lb -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 externalname-service 80
nc: connect to externalname-service port 80 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 25 16:36:59.501: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.44.183 --kubeconfig=/workspace/.kube/config --namespace=services-7985 exec execpoddv2lb -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Oct 25 16:37:01.253: INFO: stderr: "+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
Oct 25 16:37:01.253: INFO: stdout: ""
Oct 25 16:37:01.254: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.44.183 --kubeconfig=/workspace/.kube/config --namespace=services-7985 exec execpoddv2lb -- /bin/sh -x -c nc -zv -t -w 2 10.0.219.47 80'
... skipping 3 lines ...
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:37:01.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7985" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:781
•{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":306,"completed":118,"skipped":1924,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating projection with secret that has name projected-secret-test-b78bd880-594e-468f-ba13-6ee0bc11cbde
STEP: Creating a pod to test consume secrets
Oct 25 16:37:02.184: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4d2ee6e6-cce1-4f1f-9069-ed7c002558b3" in namespace "projected-8581" to be "Succeeded or Failed"
Oct 25 16:37:02.239: INFO: Pod "pod-projected-secrets-4d2ee6e6-cce1-4f1f-9069-ed7c002558b3": Phase="Pending", Reason="", readiness=false. Elapsed: 55.238708ms
Oct 25 16:37:04.288: INFO: Pod "pod-projected-secrets-4d2ee6e6-cce1-4f1f-9069-ed7c002558b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.103750174s
STEP: Saw pod success
Oct 25 16:37:04.288: INFO: Pod "pod-projected-secrets-4d2ee6e6-cce1-4f1f-9069-ed7c002558b3" satisfied condition "Succeeded or Failed"
Oct 25 16:37:04.329: INFO: Trying to get logs from node bootstrap-e2e-minion-group-jzdr pod pod-projected-secrets-4d2ee6e6-cce1-4f1f-9069-ed7c002558b3 container projected-secret-volume-test: <nil>
STEP: delete the pod
Oct 25 16:37:04.458: INFO: Waiting for pod pod-projected-secrets-4d2ee6e6-cce1-4f1f-9069-ed7c002558b3 to disappear
Oct 25 16:37:04.494: INFO: Pod pod-projected-secrets-4d2ee6e6-cce1-4f1f-9069-ed7c002558b3 no longer exists
[AfterEach] [sig-storage] Projected secret
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:37:04.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8581" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":306,"completed":119,"skipped":1944,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Oct 25 16:37:05.158: INFO: >>> kubeConfig: /workspace/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:37:20.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5236" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":306,"completed":120,"skipped":1952,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 40 lines ...
Oct 25 16:37:33.984: INFO: Deleting pod "simpletest-rc-to-be-deleted-jd98v" in namespace "gc-3112"
Oct 25 16:37:34.048: INFO: Deleting pod "simpletest-rc-to-be-deleted-pwpzq" in namespace "gc-3112"
[AfterEach] [sig-api-machinery] Garbage collector
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:37:34.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3112" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":306,"completed":121,"skipped":1981,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-instrumentation] Events API 
  should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-instrumentation] Events API
... skipping 20 lines ...
STEP: listing events in all namespaces
STEP: listing events in test namespace
[AfterEach] [sig-instrumentation] Events API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:37:35.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-9449" for this suite.
•{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":306,"completed":122,"skipped":2009,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Oct 25 16:37:35.216: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0777 on node default medium
Oct 25 16:37:35.489: INFO: Waiting up to 5m0s for pod "pod-8af3fad5-1371-4622-9e8b-d4c39c952f52" in namespace "emptydir-3765" to be "Succeeded or Failed"
Oct 25 16:37:35.527: INFO: Pod "pod-8af3fad5-1371-4622-9e8b-d4c39c952f52": Phase="Pending", Reason="", readiness=false. Elapsed: 37.340523ms
Oct 25 16:37:37.564: INFO: Pod "pod-8af3fad5-1371-4622-9e8b-d4c39c952f52": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.074591258s
STEP: Saw pod success
Oct 25 16:37:37.564: INFO: Pod "pod-8af3fad5-1371-4622-9e8b-d4c39c952f52" satisfied condition "Succeeded or Failed"
Oct 25 16:37:37.600: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod pod-8af3fad5-1371-4622-9e8b-d4c39c952f52 container test-container: <nil>
STEP: delete the pod
Oct 25 16:37:37.713: INFO: Waiting for pod pod-8af3fad5-1371-4622-9e8b-d4c39c952f52 to disappear
Oct 25 16:37:37.749: INFO: Pod pod-8af3fad5-1371-4622-9e8b-d4c39c952f52 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:37:37.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3765" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":123,"skipped":2025,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Oct 25 16:37:38.055: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2e7ab88b-c980-4ea2-af24-6dcaf0313d9c" in namespace "downward-api-397" to be "Succeeded or Failed"
Oct 25 16:37:38.091: INFO: Pod "downwardapi-volume-2e7ab88b-c980-4ea2-af24-6dcaf0313d9c": Phase="Pending", Reason="", readiness=false. Elapsed: 36.24363ms
Oct 25 16:37:40.129: INFO: Pod "downwardapi-volume-2e7ab88b-c980-4ea2-af24-6dcaf0313d9c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.073893271s
STEP: Saw pod success
Oct 25 16:37:40.129: INFO: Pod "downwardapi-volume-2e7ab88b-c980-4ea2-af24-6dcaf0313d9c" satisfied condition "Succeeded or Failed"
Oct 25 16:37:40.171: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod downwardapi-volume-2e7ab88b-c980-4ea2-af24-6dcaf0313d9c container client-container: <nil>
STEP: delete the pod
Oct 25 16:37:40.353: INFO: Waiting for pod downwardapi-volume-2e7ab88b-c980-4ea2-af24-6dcaf0313d9c to disappear
Oct 25 16:37:40.399: INFO: Pod downwardapi-volume-2e7ab88b-c980-4ea2-af24-6dcaf0313d9c no longer exists
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:37:40.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-397" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":124,"skipped":2059,"failed":0}
SSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected combined
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name configmap-projected-all-test-volume-c4421d4d-9257-4d03-98b8-05ff62d328d1
STEP: Creating secret with name secret-projected-all-test-volume-337f94d5-2bcd-4c1f-89c1-2a6b6ebefd46
STEP: Creating a pod to test Check all projections for projected volume plugin
Oct 25 16:37:41.001: INFO: Waiting up to 5m0s for pod "projected-volume-2b2762a4-0b98-41dc-a857-ebc1eface55f" in namespace "projected-6441" to be "Succeeded or Failed"
Oct 25 16:37:41.054: INFO: Pod "projected-volume-2b2762a4-0b98-41dc-a857-ebc1eface55f": Phase="Pending", Reason="", readiness=false. Elapsed: 52.80969ms
Oct 25 16:37:43.100: INFO: Pod "projected-volume-2b2762a4-0b98-41dc-a857-ebc1eface55f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099472467s
Oct 25 16:37:45.137: INFO: Pod "projected-volume-2b2762a4-0b98-41dc-a857-ebc1eface55f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.136543207s
STEP: Saw pod success
Oct 25 16:37:45.137: INFO: Pod "projected-volume-2b2762a4-0b98-41dc-a857-ebc1eface55f" satisfied condition "Succeeded or Failed"
Oct 25 16:37:45.175: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod projected-volume-2b2762a4-0b98-41dc-a857-ebc1eface55f container projected-all-volume-test: <nil>
STEP: delete the pod
Oct 25 16:37:45.296: INFO: Waiting for pod projected-volume-2b2762a4-0b98-41dc-a857-ebc1eface55f to disappear
Oct 25 16:37:45.334: INFO: Pod projected-volume-2b2762a4-0b98-41dc-a857-ebc1eface55f no longer exists
[AfterEach] [sig-storage] Projected combined
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:37:45.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6441" for this suite.
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":306,"completed":125,"skipped":2062,"failed":0}
SSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] ReplicationController
... skipping 13 lines ...
Oct 25 16:37:47.101: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:37:47.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8799" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":306,"completed":126,"skipped":2072,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Container Runtime
... skipping 12 lines ...
Oct 25 16:37:49.695: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:37:49.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7984" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":306,"completed":127,"skipped":2114,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 36 lines ...
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:37:59.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1843" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":306,"completed":128,"skipped":2137,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Oct 25 16:38:00.028: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d1a5776d-7b3f-4ea4-9505-5766db5f679f" in namespace "downward-api-8082" to be "Succeeded or Failed"
Oct 25 16:38:00.076: INFO: Pod "downwardapi-volume-d1a5776d-7b3f-4ea4-9505-5766db5f679f": Phase="Pending", Reason="", readiness=false. Elapsed: 47.912651ms
Oct 25 16:38:02.121: INFO: Pod "downwardapi-volume-d1a5776d-7b3f-4ea4-9505-5766db5f679f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.092641793s
STEP: Saw pod success
Oct 25 16:38:02.121: INFO: Pod "downwardapi-volume-d1a5776d-7b3f-4ea4-9505-5766db5f679f" satisfied condition "Succeeded or Failed"
Oct 25 16:38:02.180: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod downwardapi-volume-d1a5776d-7b3f-4ea4-9505-5766db5f679f container client-container: <nil>
STEP: delete the pod
Oct 25 16:38:02.382: INFO: Waiting for pod downwardapi-volume-d1a5776d-7b3f-4ea4-9505-5766db5f679f to disappear
Oct 25 16:38:02.422: INFO: Pod downwardapi-volume-d1a5776d-7b3f-4ea4-9505-5766db5f679f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:38:02.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8082" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":306,"completed":129,"skipped":2169,"failed":0}
SSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes control plane services is included in cluster-info  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-cli] Kubectl client
... skipping 11 lines ...
Oct 25 16:38:02.991: INFO: stderr: ""
Oct 25 16:38:02.991: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://35.247.44.183\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:38:02.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-320" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info  [Conformance]","total":306,"completed":130,"skipped":2176,"failed":0}

------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a volume subpath [sig-storage] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Variable Expansion
... skipping 2 lines ...
Oct 25 16:38:03.072: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a volume subpath [sig-storage] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test substitution in volume subpath
Oct 25 16:38:03.302: INFO: Waiting up to 5m0s for pod "var-expansion-53c42d52-4e98-4a05-a884-2ae97073051b" in namespace "var-expansion-132" to be "Succeeded or Failed"
Oct 25 16:38:03.338: INFO: Pod "var-expansion-53c42d52-4e98-4a05-a884-2ae97073051b": Phase="Pending", Reason="", readiness=false. Elapsed: 35.840929ms
Oct 25 16:38:05.377: INFO: Pod "var-expansion-53c42d52-4e98-4a05-a884-2ae97073051b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.075308869s
STEP: Saw pod success
Oct 25 16:38:05.377: INFO: Pod "var-expansion-53c42d52-4e98-4a05-a884-2ae97073051b" satisfied condition "Succeeded or Failed"
Oct 25 16:38:05.415: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod var-expansion-53c42d52-4e98-4a05-a884-2ae97073051b container dapi-container: <nil>
STEP: delete the pod
Oct 25 16:38:05.537: INFO: Waiting for pod var-expansion-53c42d52-4e98-4a05-a884-2ae97073051b to disappear
Oct 25 16:38:05.577: INFO: Pod var-expansion-53c42d52-4e98-4a05-a884-2ae97073051b no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:38:05.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-132" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":306,"completed":131,"skipped":2176,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 23 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:38:13.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7852" for this suite.
STEP: Destroying namespace "webhook-7852-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":306,"completed":132,"skipped":2185,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] Deployment
... skipping 40 lines ...
&Pod{ObjectMeta:{webserver-deployment-795d758f88-5f6f7 webserver-deployment-795d758f88- deployment-39  9750dea5-d946-499f-95ce-9e6f631083e7 10082 0 2020-10-25 16:38:23 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 ce72da73-8277-42cd-91b8-4e2369215780 0xc004b3cf17 0xc004b3cf18}] []  [{kube-controller-manager Update v1 2020-10-25 16:38:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ce72da73-8277-42cd-91b8-4e2369215780\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-25 16:38:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kn9rr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kn9rr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kn9rr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-05w9,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.5,PodIP:,StartTime:2020-10-25 16:38:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct 25 16:38:25.894: INFO: Pod "webserver-deployment-795d758f88-9hxp7" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-9hxp7 webserver-deployment-795d758f88- deployment-39  ef0c14e6-30db-4083-98cb-49a9780f3764 10094 0 2020-10-25 16:38:23 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 ce72da73-8277-42cd-91b8-4e2369215780 0xc004b3d0b0 0xc004b3d0b1}] []  [{kube-controller-manager Update v1 2020-10-25 16:38:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ce72da73-8277-42cd-91b8-4e2369215780\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-25 16:38:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kn9rr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kn9rr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kn9rr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-05w9,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.5,PodIP:,StartTime:2020-10-25 16:38:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct 25 16:38:25.894: INFO: Pod "webserver-deployment-795d758f88-h8kx7" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-h8kx7 webserver-deployment-795d758f88- deployment-39  ec5a2fc7-5e4d-4aea-a33f-eb6a641ad4f9 10077 0 2020-10-25 16:38:23 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 ce72da73-8277-42cd-91b8-4e2369215780 0xc004b3d240 0xc004b3d241}] []  [{kube-controller-manager Update v1 2020-10-25 16:38:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ce72da73-8277-42cd-91b8-4e2369215780\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-25 16:38:23 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kn9rr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kn9rr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kn9rr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-nmms,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.4,PodIP:,StartTime:2020-10-25 16:38:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct 25 16:38:25.894: INFO: Pod "webserver-deployment-795d758f88-jcn9p" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-jcn9p webserver-deployment-795d758f88- deployment-39  c50eee19-1c1a-439f-9c83-0b3432cb95c1 10105 0 2020-10-25 16:38:20 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 ce72da73-8277-42cd-91b8-4e2369215780 0xc004b3d3d0 0xc004b3d3d1}] []  [{kube-controller-manager Update v1 2020-10-25 16:38:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ce72da73-8277-42cd-91b8-4e2369215780\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-25 16:38:25 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.64.2.133\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kn9rr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kn9rr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kn9rr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-05w9,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.5,PodIP:10.64.2.133,StartTime:2020-10-25 16:38:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.64.2.133,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct 25 16:38:25.894: INFO: Pod "webserver-deployment-795d758f88-kfpn5" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-kfpn5 webserver-deployment-795d758f88- deployment-39  d1a433b4-2c1e-42dd-a243-49d3286fc590 10095 0 2020-10-25 16:38:23 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 ce72da73-8277-42cd-91b8-4e2369215780 0xc004b3d590 0xc004b3d591}] []  [{kube-controller-manager Update v1 2020-10-25 16:38:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ce72da73-8277-42cd-91b8-4e2369215780\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-25 16:38:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kn9rr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kn9rr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kn9rr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-jzdr,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.3,PodIP:,StartTime:2020-10-25 16:38:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct 25 16:38:25.894: INFO: Pod "webserver-deployment-795d758f88-rghql" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-rghql webserver-deployment-795d758f88- deployment-39  3cd774dc-1202-4373-88df-0a33b0d5e79f 10097 0 2020-10-25 16:38:23 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 ce72da73-8277-42cd-91b8-4e2369215780 0xc004b3d720 0xc004b3d721}] []  [{kube-controller-manager Update v1 2020-10-25 16:38:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ce72da73-8277-42cd-91b8-4e2369215780\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-25 16:38:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kn9rr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kn9rr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kn9rr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-05w9,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.5,PodIP:,StartTime:2020-10-25 16:38:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct 25 16:38:25.895: INFO: Pod "webserver-deployment-795d758f88-rl24b" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-rl24b webserver-deployment-795d758f88- deployment-39  fd46edf6-dc5f-4ef8-a415-8c277fd94c9b 10096 0 2020-10-25 16:38:23 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 ce72da73-8277-42cd-91b8-4e2369215780 0xc004b3d8b0 0xc004b3d8b1}] []  [{kube-controller-manager Update v1 2020-10-25 16:38:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ce72da73-8277-42cd-91b8-4e2369215780\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-25 16:38:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kn9rr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kn9rr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kn9rr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-nmms,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysc
tls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.4,PodIP:,StartTime:2020-10-25 16:38:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
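Note: the pods above are reported "not available" because the new ReplicaSet's template (pod-template-hash 795d758f88) points at the nonexistent image webserver:404, so each dump shows Phase:Pending with a Ready condition of False (Reason:ContainersNotReady) and Started:*false. The test counts a pod as available only once its Ready condition has held True for the deployment's minReadySeconds. A minimal Go sketch of that rule, using the k8s.io/api/core/v1 types; the helper name isPodAvailable is ours, a paraphrase of the upstream check rather than the e2e framework's exact code:

package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// isPodAvailable mirrors the readiness-plus-minReadySeconds rule the
// deployment controller applies: a pod is available once its Ready
// condition has been True for at least minReadySeconds.
func isPodAvailable(pod *corev1.Pod, minReadySeconds int32, now metav1.Time) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type != corev1.PodReady {
			continue
		}
		if c.Status != corev1.ConditionTrue {
			return false // e.g. Reason:ContainersNotReady in the dumps above
		}
		if minReadySeconds == 0 {
			return true
		}
		readyFor := now.Time.Sub(c.LastTransitionTime.Time)
		return readyFor >= time.Duration(minReadySeconds)*time.Second
	}
	return false // no Ready condition recorded yet
}

func main() {
	// Hypothetical pod shaped like the dumps above: Ready=False.
	pod := &corev1.Pod{Status: corev1.PodStatus{
		Conditions: []corev1.PodCondition{{
			Type:   corev1.PodReady,
			Status: corev1.ConditionFalse,
			Reason: "ContainersNotReady",
		}},
	}}
	fmt.Println(isPodAvailable(pod, 0, metav1.Now())) // false
}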
Oct 25 16:38:25.895: INFO: Pod "webserver-deployment-795d758f88-s2z2r" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-s2z2r webserver-deployment-795d758f88- deployment-39  e3ce4212-eb0b-4ddd-bd72-41199bb3341e 10093 0 2020-10-25 16:38:23 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 ce72da73-8277-42cd-91b8-4e2369215780 0xc004b3da40 0xc004b3da41}] []  [{kube-controller-manager Update v1 2020-10-25 16:38:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ce72da73-8277-42cd-91b8-4e2369215780\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-25 16:38:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kn9rr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kn9rr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kn9rr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-nmms,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysc
tls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.4,PodIP:,StartTime:2020-10-25 16:38:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct 25 16:38:25.895: INFO: Pod "webserver-deployment-795d758f88-s87sd" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-s87sd webserver-deployment-795d758f88- deployment-39  88601f4c-ce82-4e5d-8a4d-a7eba2b4e196 10098 0 2020-10-25 16:38:20 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 ce72da73-8277-42cd-91b8-4e2369215780 0xc004b3dbd0 0xc004b3dbd1}] []  [{kube-controller-manager Update v1 2020-10-25 16:38:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ce72da73-8277-42cd-91b8-4e2369215780\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-25 16:38:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.64.1.57\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kn9rr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kn9rr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kn9rr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-jzdr,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunA
sUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.3,PodIP:10.64.1.57,StartTime:2020-10-25 16:38:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.64.1.57,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
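Note: the ErrImagePull above is the expected failure mode for this test: the bare image name webserver:404 normalizes to docker.io/library/webserver:404, a repository that does not exist, so the registry denies the pull (insufficient_scope) and the kubelet records a Waiting state with Reason:ErrImagePull (retries then back off to ImagePullBackOff). A short sketch of extracting that reason from a pod's status; the helper imagePullFailures is ours, shown only to illustrate where the reason lives in the object:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// imagePullFailures returns container name, image, reason and message for
// every container stuck waiting on a pull error, as in the dumps above.
func imagePullFailures(pod *corev1.Pod) []string {
	var out []string
	for _, cs := range pod.Status.ContainerStatuses {
		w := cs.State.Waiting
		if w == nil {
			continue
		}
		if w.Reason == "ErrImagePull" || w.Reason == "ImagePullBackOff" {
			out = append(out, fmt.Sprintf("%s (%s): %s: %s",
				cs.Name, cs.Image, w.Reason, w.Message))
		}
	}
	return out
}

func main() {
	// Hypothetical status mirroring the dump above.
	pod := &corev1.Pod{Status: corev1.PodStatus{
		ContainerStatuses: []corev1.ContainerStatus{{
			Name:  "httpd",
			Image: "webserver:404",
			State: corev1.ContainerState{Waiting: &corev1.ContainerStateWaiting{
				Reason:  "ErrImagePull",
				Message: "pull access denied, repository does not exist or may require authorization",
			}},
		}},
	}}
	for _, f := range imagePullFailures(pod) {
		fmt.Println(f)
	}
}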
Oct 25 16:38:25.895: INFO: Pod "webserver-deployment-795d758f88-thxvc" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-thxvc webserver-deployment-795d758f88- deployment-39  3badf25a-4d5a-42a3-a2d9-13d31c187356 10022 0 2020-10-25 16:38:20 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 ce72da73-8277-42cd-91b8-4e2369215780 0xc004b3dd90 0xc004b3dd91}] []  [{kube-controller-manager Update v1 2020-10-25 16:38:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ce72da73-8277-42cd-91b8-4e2369215780\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-25 16:38:23 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.64.1.56\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kn9rr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kn9rr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kn9rr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-jzdr,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunA
sUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.3,PodIP:10.64.1.56,StartTime:2020-10-25 16:38:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.64.1.56,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct 25 16:38:25.896: INFO: Pod "webserver-deployment-795d758f88-v2gz8" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-v2gz8 webserver-deployment-795d758f88- deployment-39  812abff1-ef77-4379-9496-f86001eaf59d 10101 0 2020-10-25 16:38:20 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 ce72da73-8277-42cd-91b8-4e2369215780 0xc004b3df50 0xc004b3df51}] []  [{kube-controller-manager Update v1 2020-10-25 16:38:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ce72da73-8277-42cd-91b8-4e2369215780\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-25 16:38:25 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.64.2.134\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kn9rr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kn9rr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kn9rr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-05w9,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,Run
AsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.5,PodIP:10.64.2.134,StartTime:2020-10-25 16:38:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.64.2.134,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct 25 16:38:25.896: INFO: Pod "webserver-deployment-795d758f88-wngl6" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-wngl6 webserver-deployment-795d758f88- deployment-39  ded78ec6-d074-4ca9-a276-d6f4800da10b 10087 0 2020-10-25 16:38:23 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 ce72da73-8277-42cd-91b8-4e2369215780 0xc0020a8140 0xc0020a8141}] []  [{kube-controller-manager Update v1 2020-10-25 16:38:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ce72da73-8277-42cd-91b8-4e2369215780\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-25 16:38:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kn9rr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kn9rr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kn9rr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-jzdr,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysc
tls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.3,PodIP:,StartTime:2020-10-25 16:38:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct 25 16:38:25.896: INFO: Pod "webserver-deployment-795d758f88-wxvdq" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-wxvdq webserver-deployment-795d758f88- deployment-39  fc7a3014-13b4-48f2-995d-8f86158a15b5 10059 0 2020-10-25 16:38:20 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 ce72da73-8277-42cd-91b8-4e2369215780 0xc0020a82f0 0xc0020a82f1}] []  [{kube-controller-manager Update v1 2020-10-25 16:38:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ce72da73-8277-42cd-91b8-4e2369215780\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-25 16:38:23 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.64.3.36\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kn9rr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kn9rr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kn9rr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-nmms,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunA
sUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.4,PodIP:10.64.3.36,StartTime:2020-10-25 16:38:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.64.3.36,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct 25 16:38:25.896: INFO: Pod "webserver-deployment-dd94f59b7-54bf9" is not available:
&Pod{ObjectMeta:{webserver-deployment-dd94f59b7-54bf9 webserver-deployment-dd94f59b7- deployment-39  d6e7fd98-7476-44a0-a0ca-950a4cff4212 10090 0 2020-10-25 16:38:23 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ef6f3f83-114e-46d9-9084-8383ab050aa5 0xc0020a84c0 0xc0020a84c1}] []  [{kube-controller-manager Update v1 2020-10-25 16:38:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ef6f3f83-114e-46d9-9084-8383ab050aa5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-25 16:38:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kn9rr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kn9rr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kn9rr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-nmms,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil
,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.4,PodIP:,StartTime:2020-10-25 16:38:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct 25 16:38:25.897: INFO: Pod "webserver-deployment-dd94f59b7-5wc59" is not available:
&Pod{ObjectMeta:{webserver-deployment-dd94f59b7-5wc59 webserver-deployment-dd94f59b7- deployment-39  bab1f0e0-a26e-4fd5-8251-09d6deae481b 10089 0 2020-10-25 16:38:23 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ef6f3f83-114e-46d9-9084-8383ab050aa5 0xc0020a8690 0xc0020a8691}] []  [{kube-controller-manager Update v1 2020-10-25 16:38:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ef6f3f83-114e-46d9-9084-8383ab050aa5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-25 16:38:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kn9rr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kn9rr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kn9rr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-jzdr,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil
,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.3,PodIP:,StartTime:2020-10-25 16:38:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct 25 16:38:25.897: INFO: Pod "webserver-deployment-dd94f59b7-68vd6" is available:
&Pod{ObjectMeta:{webserver-deployment-dd94f59b7-68vd6 webserver-deployment-dd94f59b7- deployment-39  13df585a-bce9-49ab-8fb3-bae9081d8dfc 9970 0 2020-10-25 16:38:13 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ef6f3f83-114e-46d9-9084-8383ab050aa5 0xc0020a8830 0xc0020a8831}] []  [{kube-controller-manager Update v1 2020-10-25 16:38:13 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ef6f3f83-114e-46d9-9084-8383ab050aa5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-25 16:38:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.64.3.35\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kn9rr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kn9rr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kn9rr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-nmms,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil
,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.4,PodIP:10.64.3.35,StartTime:2020-10-25 16:38:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-25 16:38:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://3ca1b6e3bbb880c9fe6f48419c9a01e067aa75ff9d367912a47ef4d36f8130fa,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.64.3.35,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
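Note: unlike the 795d758f88 pods, this pod belongs to the old ReplicaSet (pod-template-hash dd94f59b7, image docker.io/library/httpd:2.4.38-alpine) and is Running with Ready=True, so it still counts toward the deployment's available replicas while the new rollout stalls on the bad image. A sketch of the per-ReplicaSet tally the log is effectively printing, grouping pods by their pod-template-hash label; countReadyByTemplateHash is our name, and the sample pods are hypothetical stand-ins:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// countReadyByTemplateHash groups pods by their pod-template-hash label
// (one hash per ReplicaSet) and counts how many report Ready=True, i.e.
// the pods that can count toward the deployment's available replicas.
func countReadyByTemplateHash(pods []corev1.Pod) map[string]int {
	counts := map[string]int{}
	for _, p := range pods {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				counts[p.Labels["pod-template-hash"]]++
				break
			}
		}
	}
	return counts
}

func main() {
	mk := func(hash string, ready corev1.ConditionStatus) corev1.Pod {
		return corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"pod-template-hash": hash}},
			Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
				{Type: corev1.PodReady, Status: ready},
			}},
		}
	}
	pods := []corev1.Pod{
		mk("dd94f59b7", corev1.ConditionTrue),   // old RS pod, Running/Ready
		mk("795d758f88", corev1.ConditionFalse), // new RS pod, ErrImagePull
	}
	fmt.Println(countReadyByTemplateHash(pods)) // map[dd94f59b7:1]
}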
... skipping 32 lines ...
Oct 25 16:38:25.901: INFO: Pod "webserver-deployment-dd94f59b7-xpvzk" is available:
&Pod{ObjectMeta:{webserver-deployment-dd94f59b7-xpvzk webserver-deployment-dd94f59b7- deployment-39  9b47d1a5-c366-4ce1-a6c3-2466bc56e9a6 10106 0 2020-10-25 16:38:23 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ef6f3f83-114e-46d9-9084-8383ab050aa5 0xc006b09920 0xc006b09921}] []  [{kube-controller-manager Update v1 2020-10-25 16:38:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ef6f3f83-114e-46d9-9084-8383ab050aa5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-25 16:38:25 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.64.3.38\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kn9rr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kn9rr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kn9rr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-nmms,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 16:38:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.4,PodIP:10.64.3.38,StartTime:2020-10-25 16:38:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-25 16:38:25 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://0fca72dccc2a47b998c337dab6a0f4119fe3e6adc613bcc5f88b26bf5012d1d6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.64.3.38,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:38:25.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-39" for this suite.
•{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":306,"completed":133,"skipped":2218,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Pods
... skipping 3 lines ...
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Oct 25 16:38:30.708: INFO: Waiting up to 5m0s for pod "client-envvars-e2464e18-de1e-4905-a5de-bfcf3274da4c" in namespace "pods-1610" to be "Succeeded or Failed"
Oct 25 16:38:30.797: INFO: Pod "client-envvars-e2464e18-de1e-4905-a5de-bfcf3274da4c": Phase="Pending", Reason="", readiness=false. Elapsed: 89.012515ms
Oct 25 16:38:32.845: INFO: Pod "client-envvars-e2464e18-de1e-4905-a5de-bfcf3274da4c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.136360395s
STEP: Saw pod success
Oct 25 16:38:32.845: INFO: Pod "client-envvars-e2464e18-de1e-4905-a5de-bfcf3274da4c" satisfied condition "Succeeded or Failed"
Oct 25 16:38:32.882: INFO: Trying to get logs from node bootstrap-e2e-minion-group-jzdr pod client-envvars-e2464e18-de1e-4905-a5de-bfcf3274da4c container env3cont: <nil>
STEP: delete the pod
Oct 25 16:38:33.019: INFO: Waiting for pod client-envvars-e2464e18-de1e-4905-a5de-bfcf3274da4c to disappear
Oct 25 16:38:33.090: INFO: Pod client-envvars-e2464e18-de1e-4905-a5de-bfcf3274da4c no longer exists
[AfterEach] [k8s.io] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:38:33.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1610" for this suite.
•{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":306,"completed":134,"skipped":2236,"failed":0}
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Oct 25 16:38:33.204: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0644 on node default medium
Oct 25 16:38:33.692: INFO: Waiting up to 5m0s for pod "pod-66d987fc-ee39-4e43-9164-aaf29e43e74d" in namespace "emptydir-2730" to be "Succeeded or Failed"
Oct 25 16:38:33.746: INFO: Pod "pod-66d987fc-ee39-4e43-9164-aaf29e43e74d": Phase="Pending", Reason="", readiness=false. Elapsed: 54.051169ms
Oct 25 16:38:35.786: INFO: Pod "pod-66d987fc-ee39-4e43-9164-aaf29e43e74d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.094189666s
STEP: Saw pod success
Oct 25 16:38:35.786: INFO: Pod "pod-66d987fc-ee39-4e43-9164-aaf29e43e74d" satisfied condition "Succeeded or Failed"
Oct 25 16:38:35.823: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod pod-66d987fc-ee39-4e43-9164-aaf29e43e74d container test-container: <nil>
STEP: delete the pod
Oct 25 16:38:36.311: INFO: Waiting for pod pod-66d987fc-ee39-4e43-9164-aaf29e43e74d to disappear
Oct 25 16:38:36.347: INFO: Pod pod-66d987fc-ee39-4e43-9164-aaf29e43e74d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:38:36.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2730" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":135,"skipped":2241,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name projected-configmap-test-volume-aef5b526-1854-4362-b231-af4fc3b3e001
STEP: Creating a pod to test consume configMaps
Oct 25 16:38:36.713: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-33a64fe1-466b-4399-a306-11925e30c087" in namespace "projected-5119" to be "Succeeded or Failed"
Oct 25 16:38:36.751: INFO: Pod "pod-projected-configmaps-33a64fe1-466b-4399-a306-11925e30c087": Phase="Pending", Reason="", readiness=false. Elapsed: 38.29902ms
Oct 25 16:38:38.857: INFO: Pod "pod-projected-configmaps-33a64fe1-466b-4399-a306-11925e30c087": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.14380599s
STEP: Saw pod success
Oct 25 16:38:38.857: INFO: Pod "pod-projected-configmaps-33a64fe1-466b-4399-a306-11925e30c087" satisfied condition "Succeeded or Failed"
Oct 25 16:38:38.950: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod pod-projected-configmaps-33a64fe1-466b-4399-a306-11925e30c087 container agnhost-container: <nil>
STEP: delete the pod
Oct 25 16:38:39.150: INFO: Waiting for pod pod-projected-configmaps-33a64fe1-466b-4399-a306-11925e30c087 to disappear
Oct 25 16:38:39.216: INFO: Pod pod-projected-configmaps-33a64fe1-466b-4399-a306-11925e30c087 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:38:39.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5119" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":306,"completed":136,"skipped":2249,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 22 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:38:45.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7477" for this suite.
STEP: Destroying namespace "webhook-7477-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":306,"completed":137,"skipped":2256,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] StatefulSet
... skipping 85 lines ...
Oct 25 16:39:53.005: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Oct 25 16:39:53.006: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Oct 25 16:39:53.006: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Oct 25 16:39:53.006: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.44.183 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4556 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 25 16:39:53.540: INFO: rc: 1
Oct 25 16:39:53.541: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.44.183 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4556 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: Internal error occurred: error executing command in container: failed to exec in container: container is in CONTAINER_EXITED state

error:
exit status 1
Oct 25 16:40:03.541: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.44.183 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4556 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 25 16:40:03.931: INFO: rc: 1
Oct 25 16:40:03.931: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.44.183 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4556 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
... skipping 280 lines (28 more retries identical to the above, each failing with "ss-2" NotFound, roughly every 10s through 16:44:51) ...
Oct 25 16:45:01.878: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.44.183 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4556 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 25 16:45:02.136: INFO: rc: 1
Oct 25 16:45:02.136: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: 
Oct 25 16:45:02.136: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
... skipping 13 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":306,"completed":138,"skipped":2274,"failed":0}
[sig-api-machinery] Discovery 
  should validate PreferredVersion for each APIGroup [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Discovery
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 94 lines ...
Oct 25 16:45:04.378: INFO: Versions found [{metrics.k8s.io/v1beta1 v1beta1}]
Oct 25 16:45:04.378: INFO: metrics.k8s.io/v1beta1 matches metrics.k8s.io/v1beta1
[AfterEach] [sig-api-machinery] Discovery
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:45:04.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "discovery-3989" for this suite.
•{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":306,"completed":139,"skipped":2274,"failed":0}
SSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 25 lines ...
Oct 25 16:45:21.334: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:45:21.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6683" for this suite.
•{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":306,"completed":140,"skipped":2280,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Oct 25 16:45:21.680: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0666 on tmpfs
Oct 25 16:45:22.127: INFO: Waiting up to 5m0s for pod "pod-bdc4868e-03b8-4f13-82ff-56e356c9d067" in namespace "emptydir-8749" to be "Succeeded or Failed"
Oct 25 16:45:22.206: INFO: Pod "pod-bdc4868e-03b8-4f13-82ff-56e356c9d067": Phase="Pending", Reason="", readiness=false. Elapsed: 78.86845ms
Oct 25 16:45:24.244: INFO: Pod "pod-bdc4868e-03b8-4f13-82ff-56e356c9d067": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.116364895s
STEP: Saw pod success
Oct 25 16:45:24.244: INFO: Pod "pod-bdc4868e-03b8-4f13-82ff-56e356c9d067" satisfied condition "Succeeded or Failed"
Oct 25 16:45:24.283: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod pod-bdc4868e-03b8-4f13-82ff-56e356c9d067 container test-container: <nil>
STEP: delete the pod
Oct 25 16:45:24.375: INFO: Waiting for pod pod-bdc4868e-03b8-4f13-82ff-56e356c9d067 to disappear
Oct 25 16:45:24.412: INFO: Pod pod-bdc4868e-03b8-4f13-82ff-56e356c9d067 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:45:24.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8749" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":141,"skipped":2296,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] DNS
... skipping 18 lines ...
Oct 25 16:45:37.038: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5999.svc.cluster.local from pod dns-5999/dns-test-904978fd-3474-4c7e-83ae-9b744847482d: the server could not find the requested resource (get pods dns-test-904978fd-3474-4c7e-83ae-9b744847482d)
Oct 25 16:45:37.077: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5999.svc.cluster.local from pod dns-5999/dns-test-904978fd-3474-4c7e-83ae-9b744847482d: the server could not find the requested resource (get pods dns-test-904978fd-3474-4c7e-83ae-9b744847482d)
Oct 25 16:45:37.213: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5999.svc.cluster.local from pod dns-5999/dns-test-904978fd-3474-4c7e-83ae-9b744847482d: the server could not find the requested resource (get pods dns-test-904978fd-3474-4c7e-83ae-9b744847482d)
Oct 25 16:45:37.261: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5999.svc.cluster.local from pod dns-5999/dns-test-904978fd-3474-4c7e-83ae-9b744847482d: the server could not find the requested resource (get pods dns-test-904978fd-3474-4c7e-83ae-9b744847482d)
Oct 25 16:45:37.308: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5999.svc.cluster.local from pod dns-5999/dns-test-904978fd-3474-4c7e-83ae-9b744847482d: the server could not find the requested resource (get pods dns-test-904978fd-3474-4c7e-83ae-9b744847482d)
Oct 25 16:45:37.350: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5999.svc.cluster.local from pod dns-5999/dns-test-904978fd-3474-4c7e-83ae-9b744847482d: the server could not find the requested resource (get pods dns-test-904978fd-3474-4c7e-83ae-9b744847482d)
Oct 25 16:45:37.428: INFO: Lookups using dns-5999/dns-test-904978fd-3474-4c7e-83ae-9b744847482d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5999.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5999.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5999.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5999.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5999.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5999.svc.cluster.local jessie_udp@dns-test-service-2.dns-5999.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5999.svc.cluster.local]

Oct 25 16:45:42.467: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5999.svc.cluster.local from pod dns-5999/dns-test-904978fd-3474-4c7e-83ae-9b744847482d: the server could not find the requested resource (get pods dns-test-904978fd-3474-4c7e-83ae-9b744847482d)
Oct 25 16:45:42.505: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5999.svc.cluster.local from pod dns-5999/dns-test-904978fd-3474-4c7e-83ae-9b744847482d: the server could not find the requested resource (get pods dns-test-904978fd-3474-4c7e-83ae-9b744847482d)
Oct 25 16:45:42.544: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5999.svc.cluster.local from pod dns-5999/dns-test-904978fd-3474-4c7e-83ae-9b744847482d: the server could not find the requested resource (get pods dns-test-904978fd-3474-4c7e-83ae-9b744847482d)
Oct 25 16:45:42.582: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5999.svc.cluster.local from pod dns-5999/dns-test-904978fd-3474-4c7e-83ae-9b744847482d: the server could not find the requested resource (get pods dns-test-904978fd-3474-4c7e-83ae-9b744847482d)
Oct 25 16:45:42.697: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5999.svc.cluster.local from pod dns-5999/dns-test-904978fd-3474-4c7e-83ae-9b744847482d: the server could not find the requested resource (get pods dns-test-904978fd-3474-4c7e-83ae-9b744847482d)
Oct 25 16:45:42.745: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5999.svc.cluster.local from pod dns-5999/dns-test-904978fd-3474-4c7e-83ae-9b744847482d: the server could not find the requested resource (get pods dns-test-904978fd-3474-4c7e-83ae-9b744847482d)
Oct 25 16:45:42.799: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5999.svc.cluster.local from pod dns-5999/dns-test-904978fd-3474-4c7e-83ae-9b744847482d: the server could not find the requested resource (get pods dns-test-904978fd-3474-4c7e-83ae-9b744847482d)
Oct 25 16:45:42.844: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5999.svc.cluster.local from pod dns-5999/dns-test-904978fd-3474-4c7e-83ae-9b744847482d: the server could not find the requested resource (get pods dns-test-904978fd-3474-4c7e-83ae-9b744847482d)
Oct 25 16:45:42.924: INFO: Lookups using dns-5999/dns-test-904978fd-3474-4c7e-83ae-9b744847482d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5999.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5999.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5999.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5999.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5999.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5999.svc.cluster.local jessie_udp@dns-test-service-2.dns-5999.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5999.svc.cluster.local]

Oct 25 16:45:47.552: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5999.svc.cluster.local from pod dns-5999/dns-test-904978fd-3474-4c7e-83ae-9b744847482d: the server could not find the requested resource (get pods dns-test-904978fd-3474-4c7e-83ae-9b744847482d)
Oct 25 16:45:47.592: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5999.svc.cluster.local from pod dns-5999/dns-test-904978fd-3474-4c7e-83ae-9b744847482d: the server could not find the requested resource (get pods dns-test-904978fd-3474-4c7e-83ae-9b744847482d)
Oct 25 16:45:47.631: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5999.svc.cluster.local from pod dns-5999/dns-test-904978fd-3474-4c7e-83ae-9b744847482d: the server could not find the requested resource (get pods dns-test-904978fd-3474-4c7e-83ae-9b744847482d)
Oct 25 16:45:47.670: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5999.svc.cluster.local from pod dns-5999/dns-test-904978fd-3474-4c7e-83ae-9b744847482d: the server could not find the requested resource (get pods dns-test-904978fd-3474-4c7e-83ae-9b744847482d)
Oct 25 16:45:47.791: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5999.svc.cluster.local from pod dns-5999/dns-test-904978fd-3474-4c7e-83ae-9b744847482d: the server could not find the requested resource (get pods dns-test-904978fd-3474-4c7e-83ae-9b744847482d)
Oct 25 16:45:47.829: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5999.svc.cluster.local from pod dns-5999/dns-test-904978fd-3474-4c7e-83ae-9b744847482d: the server could not find the requested resource (get pods dns-test-904978fd-3474-4c7e-83ae-9b744847482d)
Oct 25 16:45:47.867: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5999.svc.cluster.local from pod dns-5999/dns-test-904978fd-3474-4c7e-83ae-9b744847482d: the server could not find the requested resource (get pods dns-test-904978fd-3474-4c7e-83ae-9b744847482d)
Oct 25 16:45:47.905: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5999.svc.cluster.local from pod dns-5999/dns-test-904978fd-3474-4c7e-83ae-9b744847482d: the server could not find the requested resource (get pods dns-test-904978fd-3474-4c7e-83ae-9b744847482d)
Oct 25 16:45:47.985: INFO: Lookups using dns-5999/dns-test-904978fd-3474-4c7e-83ae-9b744847482d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5999.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5999.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5999.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5999.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5999.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5999.svc.cluster.local jessie_udp@dns-test-service-2.dns-5999.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5999.svc.cluster.local]

Oct 25 16:45:52.466: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5999.svc.cluster.local from pod dns-5999/dns-test-904978fd-3474-4c7e-83ae-9b744847482d: the server could not find the requested resource (get pods dns-test-904978fd-3474-4c7e-83ae-9b744847482d)
Oct 25 16:45:52.507: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5999.svc.cluster.local from pod dns-5999/dns-test-904978fd-3474-4c7e-83ae-9b744847482d: the server could not find the requested resource (get pods dns-test-904978fd-3474-4c7e-83ae-9b744847482d)
Oct 25 16:45:52.546: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5999.svc.cluster.local from pod dns-5999/dns-test-904978fd-3474-4c7e-83ae-9b744847482d: the server could not find the requested resource (get pods dns-test-904978fd-3474-4c7e-83ae-9b744847482d)
Oct 25 16:45:52.585: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5999.svc.cluster.local from pod dns-5999/dns-test-904978fd-3474-4c7e-83ae-9b744847482d: the server could not find the requested resource (get pods dns-test-904978fd-3474-4c7e-83ae-9b744847482d)
Oct 25 16:45:52.711: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5999.svc.cluster.local from pod dns-5999/dns-test-904978fd-3474-4c7e-83ae-9b744847482d: the server could not find the requested resource (get pods dns-test-904978fd-3474-4c7e-83ae-9b744847482d)
Oct 25 16:45:52.749: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5999.svc.cluster.local from pod dns-5999/dns-test-904978fd-3474-4c7e-83ae-9b744847482d: the server could not find the requested resource (get pods dns-test-904978fd-3474-4c7e-83ae-9b744847482d)
Oct 25 16:45:52.789: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5999.svc.cluster.local from pod dns-5999/dns-test-904978fd-3474-4c7e-83ae-9b744847482d: the server could not find the requested resource (get pods dns-test-904978fd-3474-4c7e-83ae-9b744847482d)
Oct 25 16:45:52.827: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5999.svc.cluster.local from pod dns-5999/dns-test-904978fd-3474-4c7e-83ae-9b744847482d: the server could not find the requested resource (get pods dns-test-904978fd-3474-4c7e-83ae-9b744847482d)
Oct 25 16:45:52.906: INFO: Lookups using dns-5999/dns-test-904978fd-3474-4c7e-83ae-9b744847482d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5999.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5999.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5999.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5999.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5999.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5999.svc.cluster.local jessie_udp@dns-test-service-2.dns-5999.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5999.svc.cluster.local]

Oct 25 16:45:57.560: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5999.svc.cluster.local from pod dns-5999/dns-test-904978fd-3474-4c7e-83ae-9b744847482d: the server could not find the requested resource (get pods dns-test-904978fd-3474-4c7e-83ae-9b744847482d)
Oct 25 16:45:57.630: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5999.svc.cluster.local from pod dns-5999/dns-test-904978fd-3474-4c7e-83ae-9b744847482d: the server could not find the requested resource (get pods dns-test-904978fd-3474-4c7e-83ae-9b744847482d)
Oct 25 16:45:58.038: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5999.svc.cluster.local from pod dns-5999/dns-test-904978fd-3474-4c7e-83ae-9b744847482d: the server could not find the requested resource (get pods dns-test-904978fd-3474-4c7e-83ae-9b744847482d)
Oct 25 16:45:58.367: INFO: Lookups using dns-5999/dns-test-904978fd-3474-4c7e-83ae-9b744847482d failed for: [wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5999.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5999.svc.cluster.local jessie_udp@dns-test-service-2.dns-5999.svc.cluster.local]

Oct 25 16:46:03.000: INFO: DNS probes using dns-5999/dns-test-904978fd-3474-4c7e-83ae-9b744847482d succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:46:03.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5999" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":306,"completed":142,"skipped":2307,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial] 
  removing taint cancels eviction [Disruptive] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial]
... skipping 20 lines ...
STEP: Waiting long enough to make sure the toleration period has passed.
Oct 25 16:48:19.403: INFO: Pod wasn't evicted. Test successful
[AfterEach] [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:48:19.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "taint-single-pod-4936" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial] removing taint cancels eviction [Disruptive] [Conformance]","total":306,"completed":143,"skipped":2315,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 50 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:48:21.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7222" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":306,"completed":144,"skipped":2358,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
... skipping 20 lines ...
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:48:28.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-8438" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
•{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":306,"completed":145,"skipped":2370,"failed":0}
SSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] 
  validates basic preemption works [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
... skipping 17 lines ...
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:49:53.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-2393" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77
•{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":306,"completed":146,"skipped":2374,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Downward API volume
... skipping 9 lines ...
STEP: Creating the pod
Oct 25 16:49:57.257: INFO: Successfully updated pod "annotationupdatee6b129a5-0eca-479c-9be4-b6f5c77032a8"
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:50:01.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-685" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":306,"completed":147,"skipped":2424,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name configmap-test-volume-map-9eda1fa3-1f8e-49aa-9c1d-c84f0dac04ec
STEP: Creating a pod to test consume configMaps
Oct 25 16:50:01.926: INFO: Waiting up to 5m0s for pod "pod-configmaps-9ca1f273-083d-4960-99e2-42aa2b815276" in namespace "configmap-473" to be "Succeeded or Failed"
Oct 25 16:50:01.988: INFO: Pod "pod-configmaps-9ca1f273-083d-4960-99e2-42aa2b815276": Phase="Pending", Reason="", readiness=false. Elapsed: 61.413305ms
Oct 25 16:50:04.026: INFO: Pod "pod-configmaps-9ca1f273-083d-4960-99e2-42aa2b815276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.099782546s
STEP: Saw pod success
Oct 25 16:50:04.026: INFO: Pod "pod-configmaps-9ca1f273-083d-4960-99e2-42aa2b815276" satisfied condition "Succeeded or Failed"
Oct 25 16:50:04.065: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod pod-configmaps-9ca1f273-083d-4960-99e2-42aa2b815276 container configmap-volume-test: <nil>
STEP: delete the pod
Oct 25 16:50:04.161: INFO: Waiting for pod pod-configmaps-9ca1f273-083d-4960-99e2-42aa2b815276 to disappear
Oct 25 16:50:04.203: INFO: Pod pod-configmaps-9ca1f273-083d-4960-99e2-42aa2b815276 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:50:04.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-473" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":306,"completed":148,"skipped":2447,"failed":0}

------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Container Runtime
... skipping 12 lines ...
Oct 25 16:50:06.042: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:50:06.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4635" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":306,"completed":149,"skipped":2447,"failed":0}

------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] StatefulSet
... skipping 12 lines ...
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-2014
STEP: Creating statefulset with conflicting port in namespace statefulset-2014
STEP: Waiting until pod test-pod starts running in namespace statefulset-2014
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-2014
Oct 25 16:50:09.225: INFO: Observed stateful pod in namespace: statefulset-2014, name: ss-0, uid: 359f237d-1988-4f38-80ba-c308149db442, status phase: Pending. Waiting for statefulset controller to delete.
Oct 25 16:50:09.646: INFO: Observed stateful pod in namespace: statefulset-2014, name: ss-0, uid: 359f237d-1988-4f38-80ba-c308149db442, status phase: Failed. Waiting for statefulset controller to delete.
Oct 25 16:50:09.669: INFO: Observed stateful pod in namespace: statefulset-2014, name: ss-0, uid: 359f237d-1988-4f38-80ba-c308149db442, status phase: Failed. Waiting for statefulset controller to delete.
Oct 25 16:50:09.681: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-2014
STEP: Removing pod with conflicting port in namespace statefulset-2014
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-2014 and enters the Running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114
Oct 25 16:50:13.919: INFO: Deleting all statefulset in ns statefulset-2014
Oct 25 16:50:14.007: INFO: Scaling statefulset ss to 0
Oct 25 16:50:24.436: INFO: Waiting for statefulset status.replicas updated to 0
Oct 25 16:50:24.474: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:50:24.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2014" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":306,"completed":150,"skipped":2447,"failed":0}
SSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 26 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:50:32.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2465" for this suite.
STEP: Destroying namespace "webhook-2465-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":306,"completed":151,"skipped":2451,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] ReplicationController
... skipping 12 lines ...
Oct 25 16:50:40.064: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:50:40.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3907" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":306,"completed":152,"skipped":2463,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 23 lines ...
Oct 25 16:50:48.932: INFO: stderr: ""
Oct 25 16:50:48.933: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-2551-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     <empty>\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:50:54.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6333" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":306,"completed":153,"skipped":2470,"failed":0}
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Oct 25 16:50:54.855: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0644 on node default medium
Oct 25 16:50:55.141: INFO: Waiting up to 5m0s for pod "pod-5464b0cd-12c8-453a-8df2-c5f054de0891" in namespace "emptydir-1279" to be "Succeeded or Failed"
Oct 25 16:50:55.177: INFO: Pod "pod-5464b0cd-12c8-453a-8df2-c5f054de0891": Phase="Pending", Reason="", readiness=false. Elapsed: 36.543364ms
Oct 25 16:50:57.215: INFO: Pod "pod-5464b0cd-12c8-453a-8df2-c5f054de0891": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.074188936s
STEP: Saw pod success
Oct 25 16:50:57.215: INFO: Pod "pod-5464b0cd-12c8-453a-8df2-c5f054de0891" satisfied condition "Succeeded or Failed"
Oct 25 16:50:57.251: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod pod-5464b0cd-12c8-453a-8df2-c5f054de0891 container test-container: <nil>
STEP: delete the pod
Oct 25 16:50:57.349: INFO: Waiting for pod pod-5464b0cd-12c8-453a-8df2-c5f054de0891 to disappear
Oct 25 16:50:57.385: INFO: Pod pod-5464b0cd-12c8-453a-8df2-c5f054de0891 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:50:57.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1279" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":154,"skipped":2473,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-instrumentation] Events API 
  should delete a collection of events [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-instrumentation] Events API
... skipping 12 lines ...
Oct 25 16:50:57.871: INFO: requesting DeleteCollection of events
STEP: check that the list of events matches the requested quantity
[AfterEach] [sig-instrumentation] Events API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:50:57.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-5587" for this suite.
•{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":306,"completed":155,"skipped":2508,"failed":0}
SSSSS
------------------------------
[sig-network] Services 
  should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Services
... skipping 44 lines ...
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:51:21.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2788" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:781
•{"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":306,"completed":156,"skipped":2513,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 15 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:51:27.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-4281" for this suite.
STEP: Destroying namespace "nsdeletetest-647" for this suite.
Oct 25 16:51:27.997: INFO: Namespace nsdeletetest-647 was already deleted
STEP: Destroying namespace "nsdeletetest-7028" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":306,"completed":157,"skipped":2532,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Oct 25 16:51:28.036: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0777 on node default medium
Oct 25 16:51:28.260: INFO: Waiting up to 5m0s for pod "pod-ea9c37f1-1f75-490f-9e5a-83a92a068b33" in namespace "emptydir-6347" to be "Succeeded or Failed"
Oct 25 16:51:28.296: INFO: Pod "pod-ea9c37f1-1f75-490f-9e5a-83a92a068b33": Phase="Pending", Reason="", readiness=false. Elapsed: 36.097628ms
Oct 25 16:51:30.333: INFO: Pod "pod-ea9c37f1-1f75-490f-9e5a-83a92a068b33": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.073234125s
STEP: Saw pod success
Oct 25 16:51:30.333: INFO: Pod "pod-ea9c37f1-1f75-490f-9e5a-83a92a068b33" satisfied condition "Succeeded or Failed"
Oct 25 16:51:30.371: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod pod-ea9c37f1-1f75-490f-9e5a-83a92a068b33 container test-container: <nil>
STEP: delete the pod
Oct 25 16:51:30.460: INFO: Waiting for pod pod-ea9c37f1-1f75-490f-9e5a-83a92a068b33 to disappear
Oct 25 16:51:30.497: INFO: Pod pod-ea9c37f1-1f75-490f-9e5a-83a92a068b33 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:51:30.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6347" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":158,"skipped":2566,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 12 lines ...
STEP: reading a file in the container
Oct 25 16:51:34.517: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl exec --namespace=svcaccounts-3928 pod-service-account-b3f0c869-c75a-417f-8948-7a0bd074f747 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:51:35.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-3928" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":306,"completed":159,"skipped":2593,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 9 lines ...
STEP: Creating the pod
Oct 25 16:51:40.124: INFO: Successfully updated pod "labelsupdate9bf59a4e-5eb7-40d1-9ce8-894d2c5bdae0"
[AfterEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:51:42.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9390" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":306,"completed":160,"skipped":2598,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating secret with name secret-test-map-6152b2ed-00ae-4704-a055-d06df2d00f9b
STEP: Creating a pod to test consume secrets
Oct 25 16:51:42.566: INFO: Waiting up to 5m0s for pod "pod-secrets-32f1d690-7169-47a5-9751-ed012b63562e" in namespace "secrets-2467" to be "Succeeded or Failed"
Oct 25 16:51:42.603: INFO: Pod "pod-secrets-32f1d690-7169-47a5-9751-ed012b63562e": Phase="Pending", Reason="", readiness=false. Elapsed: 36.653721ms
Oct 25 16:51:44.677: INFO: Pod "pod-secrets-32f1d690-7169-47a5-9751-ed012b63562e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.110322358s
STEP: Saw pod success
Oct 25 16:51:44.677: INFO: Pod "pod-secrets-32f1d690-7169-47a5-9751-ed012b63562e" satisfied condition "Succeeded or Failed"
Oct 25 16:51:44.739: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod pod-secrets-32f1d690-7169-47a5-9751-ed012b63562e container secret-volume-test: <nil>
STEP: delete the pod
Oct 25 16:51:44.903: INFO: Waiting for pod pod-secrets-32f1d690-7169-47a5-9751-ed012b63562e to disappear
Oct 25 16:51:44.967: INFO: Pod pod-secrets-32f1d690-7169-47a5-9751-ed012b63562e no longer exists
[AfterEach] [sig-storage] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:51:44.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2467" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":161,"skipped":2620,"failed":0}
SSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating pod pod-subpath-test-downwardapi-rmqg
STEP: Creating a pod to test atomic-volume-subpath
Oct 25 16:51:45.647: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-rmqg" in namespace "subpath-9289" to be "Succeeded or Failed"
Oct 25 16:51:45.701: INFO: Pod "pod-subpath-test-downwardapi-rmqg": Phase="Pending", Reason="", readiness=false. Elapsed: 54.20094ms
Oct 25 16:51:47.738: INFO: Pod "pod-subpath-test-downwardapi-rmqg": Phase="Running", Reason="", readiness=true. Elapsed: 2.09120623s
Oct 25 16:51:49.794: INFO: Pod "pod-subpath-test-downwardapi-rmqg": Phase="Running", Reason="", readiness=true. Elapsed: 4.147039872s
Oct 25 16:51:51.836: INFO: Pod "pod-subpath-test-downwardapi-rmqg": Phase="Running", Reason="", readiness=true. Elapsed: 6.189062086s
Oct 25 16:51:54.028: INFO: Pod "pod-subpath-test-downwardapi-rmqg": Phase="Running", Reason="", readiness=true. Elapsed: 8.381787925s
Oct 25 16:51:56.066: INFO: Pod "pod-subpath-test-downwardapi-rmqg": Phase="Running", Reason="", readiness=true. Elapsed: 10.419411861s
Oct 25 16:51:58.105: INFO: Pod "pod-subpath-test-downwardapi-rmqg": Phase="Running", Reason="", readiness=true. Elapsed: 12.458171623s
Oct 25 16:52:00.168: INFO: Pod "pod-subpath-test-downwardapi-rmqg": Phase="Running", Reason="", readiness=true. Elapsed: 14.52158241s
Oct 25 16:52:02.204: INFO: Pod "pod-subpath-test-downwardapi-rmqg": Phase="Running", Reason="", readiness=true. Elapsed: 16.557778011s
Oct 25 16:52:04.241: INFO: Pod "pod-subpath-test-downwardapi-rmqg": Phase="Running", Reason="", readiness=true. Elapsed: 18.594925759s
Oct 25 16:52:06.290: INFO: Pod "pod-subpath-test-downwardapi-rmqg": Phase="Running", Reason="", readiness=true. Elapsed: 20.643570045s
Oct 25 16:52:08.327: INFO: Pod "pod-subpath-test-downwardapi-rmqg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.680565877s
STEP: Saw pod success
Oct 25 16:52:08.327: INFO: Pod "pod-subpath-test-downwardapi-rmqg" satisfied condition "Succeeded or Failed"
Oct 25 16:52:08.364: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod pod-subpath-test-downwardapi-rmqg container test-container-subpath-downwardapi-rmqg: <nil>
STEP: delete the pod
Oct 25 16:52:08.450: INFO: Waiting for pod pod-subpath-test-downwardapi-rmqg to disappear
Oct 25 16:52:08.486: INFO: Pod pod-subpath-test-downwardapi-rmqg no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-rmqg
Oct 25 16:52:08.486: INFO: Deleting pod "pod-subpath-test-downwardapi-rmqg" in namespace "subpath-9289"
[AfterEach] [sig-storage] Subpath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:52:08.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9289" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":306,"completed":162,"skipped":2625,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] StatefulSet
... skipping 137 lines ...
Oct 25 16:53:04.793: INFO: Waiting for statefulset status.replicas updated to 0
Oct 25 16:53:04.844: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:53:05.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6087" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":306,"completed":163,"skipped":2653,"failed":0}
SS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Watchers
... skipping 13 lines ...
Oct 25 16:53:06.022: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-1404  4ba26b02-53ce-4773-8276-736bf0398030 13007 0 2020-10-25 16:53:05 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] []  [{e2e.test Update v1 2020-10-25 16:53:05 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Oct 25 16:53:06.022: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-1404  4ba26b02-53ce-4773-8276-736bf0398030 13008 0 2020-10-25 16:53:05 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] []  [{e2e.test Update v1 2020-10-25 16:53:05 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:53:06.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1404" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":306,"completed":164,"skipped":2655,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-cli] Kubectl client
... skipping 47 lines ...
Oct 25 16:53:10.725: INFO: stderr: ""
Oct 25 16:53:10.725: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:53:10.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9973" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":306,"completed":165,"skipped":2737,"failed":0}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 27 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:53:26.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6957" for this suite.
STEP: Destroying namespace "webhook-6957-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":306,"completed":166,"skipped":2739,"failed":0}
SSS
------------------------------
[sig-network] Services 
  should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Services
... skipping 11 lines ...
STEP: creating replication controller affinity-clusterip-transition in namespace services-5935
I1025 16:53:26.824403  144261 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-5935, replica count: 3
I1025 16:53:29.924767  144261 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Oct 25 16:53:29.998: INFO: Creating new exec pod
Oct 25 16:53:33.162: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.44.183 --kubeconfig=/workspace/.kube/config --namespace=services-5935 exec execpod-affinitygknc2 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80'
Oct 25 16:53:35.654: INFO: rc: 1
Oct 25 16:53:35.655: INFO: Service reachability failing with error: error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.44.183 --kubeconfig=/workspace/.kube/config --namespace=services-5935 exec execpod-affinitygknc2 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip-transition 80
nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 25 16:53:36.655: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.44.183 --kubeconfig=/workspace/.kube/config --namespace=services-5935 exec execpod-affinitygknc2 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80'
Oct 25 16:53:37.149: INFO: stderr: "+ nc -zv -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\n"
Oct 25 16:53:37.149: INFO: stdout: ""
Oct 25 16:53:37.149: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.44.183 --kubeconfig=/workspace/.kube/config --namespace=services-5935 exec execpod-affinitygknc2 -- /bin/sh -x -c nc -zv -t -w 2 10.0.152.140 80'
... skipping 63 lines ...
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:55:20.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5935" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:781
•{"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":306,"completed":167,"skipped":2742,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Oct 25 16:55:20.960: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0666 on tmpfs
Oct 25 16:55:21.216: INFO: Waiting up to 5m0s for pod "pod-02a47c9c-44b9-4fce-8b7e-3061cf65242f" in namespace "emptydir-3091" to be "Succeeded or Failed"
Oct 25 16:55:21.288: INFO: Pod "pod-02a47c9c-44b9-4fce-8b7e-3061cf65242f": Phase="Pending", Reason="", readiness=false. Elapsed: 71.335539ms
Oct 25 16:55:23.325: INFO: Pod "pod-02a47c9c-44b9-4fce-8b7e-3061cf65242f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.109009108s
STEP: Saw pod success
Oct 25 16:55:23.325: INFO: Pod "pod-02a47c9c-44b9-4fce-8b7e-3061cf65242f" satisfied condition "Succeeded or Failed"
Oct 25 16:55:23.362: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod pod-02a47c9c-44b9-4fce-8b7e-3061cf65242f container test-container: <nil>
STEP: delete the pod
Oct 25 16:55:23.461: INFO: Waiting for pod pod-02a47c9c-44b9-4fce-8b7e-3061cf65242f to disappear
Oct 25 16:55:23.498: INFO: Pod pod-02a47c9c-44b9-4fce-8b7e-3061cf65242f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:55:23.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3091" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":168,"skipped":2770,"failed":0}

------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 12 lines ...
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:55:35.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3175" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":306,"completed":169,"skipped":2770,"failed":0}
SS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Kubelet
... skipping 9 lines ...
[It] should have an terminated reason [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[AfterEach] [k8s.io] Kubelet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:55:39.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-97" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":306,"completed":170,"skipped":2772,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] DNS
... skipping 25 lines ...
Oct 25 16:55:43.004: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:55:43.043: INFO: Unable to read jessie_udp@dns-test-service.dns-2851 from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:55:43.086: INFO: Unable to read jessie_tcp@dns-test-service.dns-2851 from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:55:43.126: INFO: Unable to read jessie_udp@dns-test-service.dns-2851.svc from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:55:43.167: INFO: Unable to read jessie_tcp@dns-test-service.dns-2851.svc from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:55:43.209: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2851.svc from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:55:43.499: INFO: Lookups using dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2851 wheezy_tcp@dns-test-service.dns-2851 wheezy_udp@dns-test-service.dns-2851.svc wheezy_tcp@dns-test-service.dns-2851.svc wheezy_udp@_http._tcp.dns-test-service.dns-2851.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2851.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2851 jessie_tcp@dns-test-service.dns-2851 jessie_udp@dns-test-service.dns-2851.svc jessie_tcp@dns-test-service.dns-2851.svc jessie_udp@_http._tcp.dns-test-service.dns-2851.svc]

Oct 25 16:55:48.539: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:55:48.680: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:55:48.736: INFO: Unable to read wheezy_udp@dns-test-service.dns-2851 from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:55:48.781: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2851 from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:55:48.843: INFO: Unable to read wheezy_udp@dns-test-service.dns-2851.svc from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:55:48.881: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2851.svc from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:55:49.232: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:55:49.272: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:55:49.310: INFO: Unable to read jessie_udp@dns-test-service.dns-2851 from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:55:49.348: INFO: Unable to read jessie_tcp@dns-test-service.dns-2851 from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:55:49.388: INFO: Unable to read jessie_udp@dns-test-service.dns-2851.svc from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:55:49.430: INFO: Unable to read jessie_tcp@dns-test-service.dns-2851.svc from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:55:49.856: INFO: Lookups using dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2851 wheezy_tcp@dns-test-service.dns-2851 wheezy_udp@dns-test-service.dns-2851.svc wheezy_tcp@dns-test-service.dns-2851.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2851 jessie_tcp@dns-test-service.dns-2851 jessie_udp@dns-test-service.dns-2851.svc jessie_tcp@dns-test-service.dns-2851.svc]

Oct 25 16:55:53.552: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:55:53.593: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:55:53.635: INFO: Unable to read wheezy_udp@dns-test-service.dns-2851 from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:55:53.681: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2851 from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:55:53.731: INFO: Unable to read wheezy_udp@dns-test-service.dns-2851.svc from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:55:53.781: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2851.svc from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:55:54.195: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:55:54.237: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:55:54.280: INFO: Unable to read jessie_udp@dns-test-service.dns-2851 from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:55:54.324: INFO: Unable to read jessie_tcp@dns-test-service.dns-2851 from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:55:54.363: INFO: Unable to read jessie_udp@dns-test-service.dns-2851.svc from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:55:54.401: INFO: Unable to read jessie_tcp@dns-test-service.dns-2851.svc from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:55:54.718: INFO: Lookups using dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2851 wheezy_tcp@dns-test-service.dns-2851 wheezy_udp@dns-test-service.dns-2851.svc wheezy_tcp@dns-test-service.dns-2851.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2851 jessie_tcp@dns-test-service.dns-2851 jessie_udp@dns-test-service.dns-2851.svc jessie_tcp@dns-test-service.dns-2851.svc]

Oct 25 16:55:58.537: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:55:58.575: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:55:58.613: INFO: Unable to read wheezy_udp@dns-test-service.dns-2851 from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:55:58.651: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2851 from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:55:58.690: INFO: Unable to read wheezy_udp@dns-test-service.dns-2851.svc from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:55:58.727: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2851.svc from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:55:59.097: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:55:59.146: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:55:59.185: INFO: Unable to read jessie_udp@dns-test-service.dns-2851 from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:55:59.223: INFO: Unable to read jessie_tcp@dns-test-service.dns-2851 from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:55:59.262: INFO: Unable to read jessie_udp@dns-test-service.dns-2851.svc from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:55:59.300: INFO: Unable to read jessie_tcp@dns-test-service.dns-2851.svc from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:55:59.611: INFO: Lookups using dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2851 wheezy_tcp@dns-test-service.dns-2851 wheezy_udp@dns-test-service.dns-2851.svc wheezy_tcp@dns-test-service.dns-2851.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2851 jessie_tcp@dns-test-service.dns-2851 jessie_udp@dns-test-service.dns-2851.svc jessie_tcp@dns-test-service.dns-2851.svc]

Oct 25 16:56:03.554: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:56:03.747: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:56:03.953: INFO: Unable to read wheezy_udp@dns-test-service.dns-2851 from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:56:04.151: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2851 from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:56:04.252: INFO: Unable to read wheezy_udp@dns-test-service.dns-2851.svc from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:56:04.306: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2851.svc from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:56:04.913: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:56:04.956: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:56:05.011: INFO: Unable to read jessie_udp@dns-test-service.dns-2851 from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:56:05.075: INFO: Unable to read jessie_tcp@dns-test-service.dns-2851 from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:56:05.126: INFO: Unable to read jessie_udp@dns-test-service.dns-2851.svc from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:56:05.212: INFO: Unable to read jessie_tcp@dns-test-service.dns-2851.svc from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:56:05.569: INFO: Lookups using dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2851 wheezy_tcp@dns-test-service.dns-2851 wheezy_udp@dns-test-service.dns-2851.svc wheezy_tcp@dns-test-service.dns-2851.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2851 jessie_tcp@dns-test-service.dns-2851 jessie_udp@dns-test-service.dns-2851.svc jessie_tcp@dns-test-service.dns-2851.svc]

Oct 25 16:56:08.538: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:56:08.575: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:56:08.614: INFO: Unable to read wheezy_udp@dns-test-service.dns-2851 from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:56:08.653: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2851 from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:56:08.690: INFO: Unable to read wheezy_udp@dns-test-service.dns-2851.svc from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:56:08.729: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2851.svc from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:56:09.080: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:56:09.118: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:56:09.157: INFO: Unable to read jessie_udp@dns-test-service.dns-2851 from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:56:09.195: INFO: Unable to read jessie_tcp@dns-test-service.dns-2851 from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:56:09.234: INFO: Unable to read jessie_udp@dns-test-service.dns-2851.svc from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:56:09.296: INFO: Unable to read jessie_tcp@dns-test-service.dns-2851.svc from pod dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19: the server could not find the requested resource (get pods dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19)
Oct 25 16:56:09.641: INFO: Lookups using dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2851 wheezy_tcp@dns-test-service.dns-2851 wheezy_udp@dns-test-service.dns-2851.svc wheezy_tcp@dns-test-service.dns-2851.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2851 jessie_tcp@dns-test-service.dns-2851 jessie_udp@dns-test-service.dns-2851.svc jessie_tcp@dns-test-service.dns-2851.svc]

Oct 25 16:56:14.592: INFO: DNS probes using dns-2851/dns-test-a12991ea-587a-4070-81cb-1ed1f4271f19 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:56:14.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2851" for this suite.
•{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":306,"completed":171,"skipped":2903,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Oct 25 16:56:14.892: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0777 on tmpfs
Oct 25 16:56:15.119: INFO: Waiting up to 5m0s for pod "pod-1e09d2db-c22f-4e1a-87d8-fb6c79d9b298" in namespace "emptydir-8803" to be "Succeeded or Failed"
Oct 25 16:56:15.157: INFO: Pod "pod-1e09d2db-c22f-4e1a-87d8-fb6c79d9b298": Phase="Pending", Reason="", readiness=false. Elapsed: 38.213461ms
Oct 25 16:56:17.194: INFO: Pod "pod-1e09d2db-c22f-4e1a-87d8-fb6c79d9b298": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.07551075s
STEP: Saw pod success
Oct 25 16:56:17.194: INFO: Pod "pod-1e09d2db-c22f-4e1a-87d8-fb6c79d9b298" satisfied condition "Succeeded or Failed"
Oct 25 16:56:17.231: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod pod-1e09d2db-c22f-4e1a-87d8-fb6c79d9b298 container test-container: <nil>
STEP: delete the pod
Oct 25 16:56:17.352: INFO: Waiting for pod pod-1e09d2db-c22f-4e1a-87d8-fb6c79d9b298 to disappear
Oct 25 16:56:17.389: INFO: Pod pod-1e09d2db-c22f-4e1a-87d8-fb6c79d9b298 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:56:17.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8803" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":172,"skipped":2930,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] ConfigMap
... skipping 12 lines ...
STEP: Creating configMap with name cm-test-opt-create-9404ca3d-28b0-4ac8-b05b-bf4bb7bebf86
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:57:45.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2952" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":306,"completed":173,"skipped":2941,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Pods
... skipping 15 lines ...
STEP: deleting the pod gracefully
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:58:00.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7007" for this suite.
•{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":306,"completed":174,"skipped":2965,"failed":0}
SSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 8 lines ...
Oct 25 16:58:01.182: INFO: >>> kubeConfig: /workspace/.kube/config
Oct 25 16:58:06.742: INFO: >>> kubeConfig: /workspace/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:58:26.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5670" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":306,"completed":175,"skipped":2969,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl server-side dry-run 
  should check if kubectl can dry-run update Pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-cli] Kubectl client
... skipping 13 lines ...
STEP: replace the image in the pod with server-side dry-run
Oct 25 16:58:26.659: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.44.183 --kubeconfig=/workspace/.kube/config --namespace=kubectl-4807 get pod e2e-test-httpd-pod -o json'
Oct 25 16:58:26.894: INFO: stderr: ""
Oct 25 16:58:26.894: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-10-25T16:58:26Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-httpd-pod\"\n        },\n        \"managedFields\": [\n            {\n                \"apiVersion\": \"v1\",\n                \"fieldsType\": \"FieldsV1\",\n                \"fieldsV1\": {\n                    \"f:metadata\": {\n                        \"f:labels\": {\n                            \".\": {},\n                            \"f:run\": {}\n                        }\n                    },\n                    \"f:spec\": {\n                        \"f:containers\": {\n                            \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n                                \".\": {},\n                                \"f:image\": {},\n                                \"f:imagePullPolicy\": {},\n                                \"f:name\": {},\n                                \"f:resources\": {},\n                                \"f:terminationMessagePath\": {},\n                                \"f:terminationMessagePolicy\": {}\n                            }\n                        },\n                        \"f:dnsPolicy\": {},\n                        \"f:enableServiceLinks\": {},\n                        \"f:restartPolicy\": {},\n                        \"f:schedulerName\": {},\n                        \"f:securityContext\": {},\n                        \"f:terminationGracePeriodSeconds\": {}\n                    }\n                },\n                \"manager\": \"kubectl-run\",\n                \"operation\": \"Update\",\n                \"time\": \"2020-10-25T16:58:26Z\"\n            }\n        ],\n        \"name\": \"e2e-test-httpd-pod\",\n        \"namespace\": \"kubectl-4807\",\n        \"resourceVersion\": \"13986\",\n        \"uid\": \"fee21c40-5240-4533-bffe-9e70646e550c\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-httpd-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-ss689\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"bootstrap-e2e-minion-group-05w9\",\n        \"preemptionPolicy\": \"PreemptLowerPriority\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n        
        \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-ss689\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-ss689\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-10-25T16:58:26Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"phase\": \"Pending\",\n        \"qosClass\": \"BestEffort\"\n    }\n}\n"
Oct 25 16:58:26.895: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.44.183 --kubeconfig=/workspace/.kube/config --namespace=kubectl-4807 replace -f - --dry-run=server'
Oct 25 16:58:27.559: INFO: rc: 1
Oct 25 16:58:27.560: FAIL: Unexpected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.44.183 --kubeconfig=/workspace/.kube/config --namespace=kubectl-4807 replace -f - --dry-run=server:\nCommand stdout:\n\nstderr:\nError from server (Conflict): error when replacing \"STDIN\": Operation cannot be fulfilled on pods \"e2e-test-httpd-pod\": the object has been modified; please apply your changes to the latest version and try again\n\nerror:\nexit status 1",
        },
        Code: 1,
    }
    error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.44.183 --kubeconfig=/workspace/.kube/config --namespace=kubectl-4807 replace -f - --dry-run=server:
    Command stdout:
    
    stderr:
    Error from server (Conflict): error when replacing "STDIN": Operation cannot be fulfilled on pods "e2e-test-httpd-pod": the object has been modified; please apply your changes to the latest version and try again
    
    error:
    exit status 1
occurred
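For context on this failure: it is a read-modify-write race. The pod's resourceVersion advanced between the kubectl get and the server-side dry-run replace (the kubelet was still filling in pod status), so the replace resent a stale object and the API server rejected it with 409 Conflict; kubectl replace reads the object once and resends it verbatim, which is why the race surfaces here. A sketch of the standard remedy using client-go's retry helper, assuming a clientset; the function and its parameters are illustrative:

package conflictretry

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// UpdateImage re-reads the pod and re-applies the change on every 409
// instead of replacing a stale resourceVersion. To mirror kubectl's
// --dry-run=server, the Update could pass
// metav1.UpdateOptions{DryRun: []string{metav1.DryRunAll}}.
func UpdateImage(ctx context.Context, cs kubernetes.Interface, ns, name, image string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{}) // fresh resourceVersion
		if err != nil {
			return err
		}
		pod.Spec.Containers[0].Image = image
		_, err = cs.CoreV1().Pods(ns).Update(ctx, pod, metav1.UpdateOptions{})
		return err // another Conflict here triggers one more Get+Update round
	})
}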

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.KubectlBuilder.ExecOrDie(0xc002ea5340, 0x0, 0xc007d663c0, 0xc, 0x4, 0xc004a81f80)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:598 +0xbf
... skipping 16 lines ...
Oct 25 16:58:27.596: INFO: At 2020-10-25 16:58:26 +0000 UTC - event for e2e-test-httpd-pod: {default-scheduler } Scheduled: Successfully assigned kubectl-4807/e2e-test-httpd-pod to bootstrap-e2e-minion-group-05w9
Oct 25 16:58:27.633: INFO: POD                 NODE                             PHASE    GRACE  CONDITIONS
Oct 25 16:58:27.633: INFO: e2e-test-httpd-pod  bootstrap-e2e-minion-group-05w9  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-25 16:58:26 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-25 16:58:26 +0000 UTC ContainersNotReady containers with unready status: [e2e-test-httpd-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-25 16:58:26 +0000 UTC ContainersNotReady containers with unready status: [e2e-test-httpd-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-25 16:58:26 +0000 UTC  }]
Oct 25 16:58:27.633: INFO: 
Oct 25 16:58:27.671: INFO: 
Logging node info for node bootstrap-e2e-master
Oct 25 16:58:27.710: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master    eb7aedbd-9c39-427f-a707-980f10c2e2b6 13364 0 2020-10-25 16:09:08 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubelet Update v1 2020-10-25 16:09:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/instance-type":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:attachable-volumes-gce-pd":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:attachable-volumes-gce-pd":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:config":{},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kube-controller-manager Update v1 2020-10-25 16:09:26 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}},"f:labels":{"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}}}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-gci-gce-ingress1-5/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16684785664 0} {<nil>}  BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3866808320 0} {<nil>} 3776180Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{15016307073 0} {<nil>} 15016307073 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3604664320 0} {<nil>} 3520180Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-10-25 16:09:26 +0000 UTC,LastTransitionTime:2020-10-25 16:09:26 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-10-25 16:54:44 +0000 UTC,LastTransitionTime:2020-10-25 16:09:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-10-25 16:54:44 +0000 UTC,LastTransitionTime:2020-10-25 16:09:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-10-25 16:54:44 +0000 UTC,LastTransitionTime:2020-10-25 16:09:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-10-25 16:54:44 +0000 UTC,LastTransitionTime:2020-10-25 16:09:18 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.247.44.183,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-gci-gce-ingress1-5.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-gci-gce-ingress1-5.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:82f263e47bf24642066ee88c08de0b99,SystemUUID:82f263e4-7bf2-4642-066e-e88c08de0b99,BootID:f51f2a5e-640e-4df6-8861-2cdfe4e7d897,KernelVersion:5.4.49+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.4.1,KubeletVersion:v1.20.0-alpha.3.114+5935fcd704fe89,KubeProxyVersion:v1.20.0-alpha.3.114+5935fcd704fe89,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-apiserver-amd64:v1.20.0-alpha.3.114_5935fcd704fe89],SizeBytes:171109681,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager-amd64:v1.20.0-alpha.3.114_5935fcd704fe89],SizeBytes:162053965,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:86742272,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler-amd64:v1.20.0-alpha.3.114_5935fcd704fe89],SizeBytes:69550394,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager@sha256:c0ed56727cd78700034f2f863d774412c78681fb6535456f5e5c420f4248c5a1 k8s.gcr.io/kube-addon-manager:v9.1.1],SizeBytes:30515541,},ContainerImage{Names:[k8s.gcr.io/ingress-gce-glbc-amd64@sha256:1a859d138b4874642e9a8709e7ab04324669c77742349b5b21b1ef8a25fef55f k8s.gcr.io/ingress-gce-glbc-amd64:v1.6.1],SizeBytes:26526716,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e k8s.gcr.io/prometheus-to-sd:v0.5.0],SizeBytes:9515805,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:299513,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},}
Oct 25 16:58:27.711: INFO: 
Logging kubelet events for node bootstrap-e2e-master
Oct 25 16:58:27.751: INFO: 
Logging pods the kubelet thinks are on node bootstrap-e2e-master
Oct 25 16:58:27.800: INFO: etcd-server-bootstrap-e2e-master started at 2020-10-25 16:07:37 +0000 UTC (0+1 container statuses recorded)
Oct 25 16:58:27.800: INFO: 	Container etcd-container ready: true, restart count 0
... skipping 14 lines ...
Oct 25 16:58:27.800: INFO: 	Container etcd-container ready: true, restart count 0
W1025 16:58:27.843116  144261 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 25 16:58:28.009: INFO: 
Latency metrics for node bootstrap-e2e-master
Oct 25 16:58:28.009: INFO: 
Logging node info for node bootstrap-e2e-minion-group-05w9
Oct 25 16:58:28.046: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-05w9    e20536ce-8eb7-44f0-8923-b6a8e776648d 13345 0 2020-10-25 16:09:11 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-05w9 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{node-problem-detector Update v1 2020-10-25 16:09:15 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2020-10-25 16:09:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}},"f:labels":{"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}}} {e2e.test Update v1 2020-10-25 16:49:30 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:scheduling.k8s.io/foo":{}}}}} {kubelet Update v1 2020-10-25 16:49:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/instance-type":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:attachable-volumes-gce-pd":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{".":{},"f:attachable-volumes-gce-pd":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:config":{},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-gci-gce-ingress1-5/us-west1-b/bootstrap-e2e-minion-group-05w9,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101241290752 0} {<nil>}  BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7823925248 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {<nil>} 3 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91117161526 0} {<nil>} 91117161526 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7561781248 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {<nil>} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2020-10-25 16:54:19 +0000 UTC,LastTransitionTime:2020-10-25 16:09:14 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2020-10-25 16:54:19 +0000 UTC,LastTransitionTime:2020-10-25 16:09:14 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning 
properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2020-10-25 16:54:19 +0000 UTC,LastTransitionTime:2020-10-25 16:09:14 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2020-10-25 16:54:19 +0000 UTC,LastTransitionTime:2020-10-25 16:09:14 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2020-10-25 16:54:19 +0000 UTC,LastTransitionTime:2020-10-25 16:09:14 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2020-10-25 16:54:19 +0000 UTC,LastTransitionTime:2020-10-25 16:09:14 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2020-10-25 16:54:19 +0000 UTC,LastTransitionTime:2020-10-25 16:09:14 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-10-25 16:09:26 +0000 UTC,LastTransitionTime:2020-10-25 16:09:26 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-10-25 16:54:36 +0000 UTC,LastTransitionTime:2020-10-25 16:09:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-10-25 16:54:36 +0000 UTC,LastTransitionTime:2020-10-25 16:09:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-10-25 16:54:36 +0000 UTC,LastTransitionTime:2020-10-25 16:09:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-10-25 16:54:36 +0000 UTC,LastTransitionTime:2020-10-25 16:09:11 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.105.92.64,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-05w9.c.k8s-gci-gce-ingress1-5.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-05w9.c.k8s-gci-gce-ingress1-5.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6feaf92e5e707d51bc2ccc16c50b9bc0,SystemUUID:6feaf92e-5e70-7d51-bc2c-cc16c50b9bc0,BootID:97d1d4d0-64f3-4c16-8202-410627fcf9ab,KernelVersion:5.4.49+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.4.1,KubeletVersion:v1.20.0-alpha.3.114+5935fcd704fe89,KubeProxyVersion:v1.20.0-alpha.3.114+5935fcd704fe89,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.20.0-alpha.3.114_5935fcd704fe89],SizeBytes:140129137,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[docker.io/library/nginx@sha256:ed7f815851b5299f616220a63edac69a4cc200e7f536a56e421988da82e44ed8 docker.io/library/nginx:latest],SizeBytes:53593938,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a k8s.gcr.io/e2e-test-images/agnhost:2.21],SizeBytes:46251468,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:13982350,},ContainerImage{Names:[k8s.gcr.io/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b k8s.gcr.io/metrics-server-amd64:v0.3.6],SizeBytes:10542830,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e k8s.gcr.io/prometheus-to-sd:v0.5.0],SizeBytes:9515805,},ContainerImage{Names:[k8s.gcr.io/addon-resizer@sha256:35745de3c9a2884d53ad0e81b39f1eed9a7c77f5f909b9e84f9712b37ffb3021 k8s.gcr.io/addon-resizer:1.8.11],SizeBytes:9347950,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[k8s.gcr.io/ingress-gce-404-server-with-metrics-amd64@sha256:d83d8a481145d0eb71f8bd71ae236d1c6a931dd3bdcaf80919a8ec4a4d8aff74 k8s.gcr.io/ingress-gce-404-server-with-metrics-amd64:v1.6.0],SizeBytes:6362391,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc 
gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:1804628,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:299513,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},}
Oct 25 16:58:28.047: INFO: 
Logging kubelet events for node bootstrap-e2e-minion-group-05w9
Oct 25 16:58:28.086: INFO: 
Logging pods the kubelet thinks are on node bootstrap-e2e-minion-group-05w9
Oct 25 16:58:28.129: INFO: e2e-test-httpd-pod started at 2020-10-25 16:58:26 +0000 UTC (0+1 container statuses recorded)
Oct 25 16:58:28.129: INFO: 	Container e2e-test-httpd-pod ready: true, restart count 0
... skipping 4 lines ...
Oct 25 16:58:28.129: INFO: 	Container prometheus-to-sd-exporter ready: true, restart count 0
W1025 16:58:28.172255  144261 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 25 16:58:28.307: INFO: 
Latency metrics for node bootstrap-e2e-minion-group-05w9
Oct 25 16:58:28.307: INFO: 
Logging node info for node bootstrap-e2e-minion-group-jzdr
Oct 25 16:58:28.345: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-jzdr    15416c13-03dd-4fef-a049-fddf28605158 13344 0 2020-10-25 16:09:10 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-jzdr kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{node-problem-detector Update v1 2020-10-25 16:09:15 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2020-10-25 16:09:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}},"f:labels":{"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}}} {e2e.test Update v1 2020-10-25 16:49:30 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:scheduling.k8s.io/foo":{}}}}} {kubelet Update v1 2020-10-25 16:49:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/instance-type":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:attachable-volumes-gce-pd":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{".":{},"f:attachable-volumes-gce-pd":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:config":{},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-gci-gce-ingress1-5/us-west1-b/bootstrap-e2e-minion-group-jzdr,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101241290752 0} {<nil>}  BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7823925248 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {<nil>} 3 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91117161526 0} {<nil>} 91117161526 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7561781248 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {<nil>} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2020-10-25 16:54:17 +0000 UTC,LastTransitionTime:2020-10-25 16:09:14 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2020-10-25 16:54:17 +0000 UTC,LastTransitionTime:2020-10-25 16:09:14 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not 
read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2020-10-25 16:54:17 +0000 UTC,LastTransitionTime:2020-10-25 16:09:14 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2020-10-25 16:54:17 +0000 UTC,LastTransitionTime:2020-10-25 16:09:14 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2020-10-25 16:54:17 +0000 UTC,LastTransitionTime:2020-10-25 16:09:14 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2020-10-25 16:54:17 +0000 UTC,LastTransitionTime:2020-10-25 16:09:14 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2020-10-25 16:54:17 +0000 UTC,LastTransitionTime:2020-10-25 16:09:14 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-10-25 16:09:26 +0000 UTC,LastTransitionTime:2020-10-25 16:09:26 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-10-25 16:54:36 +0000 UTC,LastTransitionTime:2020-10-25 16:09:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-10-25 16:54:36 +0000 UTC,LastTransitionTime:2020-10-25 16:09:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-10-25 16:54:36 +0000 UTC,LastTransitionTime:2020-10-25 16:09:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-10-25 16:54:36 +0000 UTC,LastTransitionTime:2020-10-25 16:09:11 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:35.230.111.216,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-jzdr.c.k8s-gci-gce-ingress1-5.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-jzdr.c.k8s-gci-gce-ingress1-5.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ab128703dbbabfe45ce34e0fa9d31516,SystemUUID:ab128703-dbba-bfe4-5ce3-4e0fa9d31516,BootID:196850a3-dddf-48d8-98f3-67a5c77423c6,KernelVersion:5.4.49+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.4.1,KubeletVersion:v1.20.0-alpha.3.114+5935fcd704fe89,KubeProxyVersion:v1.20.0-alpha.3.114+5935fcd704fe89,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.20.0-alpha.3.114_5935fcd704fe89],SizeBytes:140129137,},ContainerImage{Names:[docker.io/library/nginx@sha256:ed7f815851b5299f616220a63edac69a4cc200e7f536a56e421988da82e44ed8 docker.io/library/nginx:latest],SizeBytes:53593938,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a k8s.gcr.io/e2e-test-images/agnhost:2.21],SizeBytes:46251468,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:13982350,},ContainerImage{Names:[k8s.gcr.io/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b k8s.gcr.io/metrics-server-amd64:v0.3.6],SizeBytes:10542830,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e k8s.gcr.io/prometheus-to-sd:v0.5.0],SizeBytes:9515805,},ContainerImage{Names:[k8s.gcr.io/addon-resizer@sha256:35745de3c9a2884d53ad0e81b39f1eed9a7c77f5f909b9e84f9712b37ffb3021 k8s.gcr.io/addon-resizer:1.8.11],SizeBytes:9347950,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[k8s.gcr.io/ingress-gce-404-server-with-metrics-amd64@sha256:d83d8a481145d0eb71f8bd71ae236d1c6a931dd3bdcaf80919a8ec4a4d8aff74 k8s.gcr.io/ingress-gce-404-server-with-metrics-amd64:v1.6.0],SizeBytes:6362391,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:1804628,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f 
k8s.gcr.io/pause:3.2],SizeBytes:299513,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},}
Oct 25 16:58:28.345: INFO: 
Logging kubelet events for node bootstrap-e2e-minion-group-jzdr
Oct 25 16:58:28.384: INFO: 
Logging pods the kubelet thinks are on node bootstrap-e2e-minion-group-jzdr
Oct 25 16:58:28.444: INFO: kube-proxy-bootstrap-e2e-minion-group-jzdr started at 2020-10-25 16:09:10 +0000 UTC (0+1 container statuses recorded)
Oct 25 16:58:28.444: INFO: 	Container kube-proxy ready: true, restart count 0
... skipping 6 lines ...
Oct 25 16:58:28.444: INFO: 	Container default-http-backend ready: true, restart count 0
W1025 16:58:28.487388  144261 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 25 16:58:28.616: INFO: 
Latency metrics for node bootstrap-e2e-minion-group-jzdr
Oct 25 16:58:28.616: INFO: 
Logging node info for node bootstrap-e2e-minion-group-nmms
Oct 25 16:58:28.654: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-nmms    e7feb765-8475-445a-830f-dcf8ac9a8e1e 13349 0 2020-10-25 16:09:12 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-nmms kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{node-problem-detector Update v1 2020-10-25 16:09:15 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2020-10-25 16:09:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}},"f:labels":{"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}}} {e2e.test Update v1 2020-10-25 16:49:30 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:scheduling.k8s.io/foo":{}}}}} {kubelet Update v1 2020-10-25 16:49:37 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/instance-type":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:attachable-volumes-gce-pd":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{".":{},"f:attachable-volumes-gce-pd":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:config":{},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-gci-gce-ingress1-5/us-west1-b/bootstrap-e2e-minion-group-nmms,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101241290752 0} {<nil>}  BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7823925248 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {<nil>} 3 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91117161526 0} {<nil>} 91117161526 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7561781248 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {<nil>} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2020-10-25 16:54:20 +0000 UTC,LastTransitionTime:2020-10-25 16:09:14 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2020-10-25 16:54:20 +0000 UTC,LastTransitionTime:2020-10-25 16:09:14 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning 
properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2020-10-25 16:54:20 +0000 UTC,LastTransitionTime:2020-10-25 16:09:14 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2020-10-25 16:54:20 +0000 UTC,LastTransitionTime:2020-10-25 16:09:14 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2020-10-25 16:54:20 +0000 UTC,LastTransitionTime:2020-10-25 16:09:14 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2020-10-25 16:54:20 +0000 UTC,LastTransitionTime:2020-10-25 16:09:14 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2020-10-25 16:54:20 +0000 UTC,LastTransitionTime:2020-10-25 16:09:14 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-10-25 16:09:26 +0000 UTC,LastTransitionTime:2020-10-25 16:09:26 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-10-25 16:54:38 +0000 UTC,LastTransitionTime:2020-10-25 16:09:12 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-10-25 16:54:38 +0000 UTC,LastTransitionTime:2020-10-25 16:09:12 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-10-25 16:54:38 +0000 UTC,LastTransitionTime:2020-10-25 16:09:12 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-10-25 16:54:38 +0000 UTC,LastTransitionTime:2020-10-25 16:09:13 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:104.199.115.151,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-nmms.c.k8s-gci-gce-ingress1-5.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-nmms.c.k8s-gci-gce-ingress1-5.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:86f90c7516266eae29db0d31d2c655be,SystemUUID:86f90c75-1626-6eae-29db-0d31d2c655be,BootID:6a6c0dce-b236-402f-8b05-0c7f6ff15e50,KernelVersion:5.4.49+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.4.1,KubeletVersion:v1.20.0-alpha.3.114+5935fcd704fe89,KubeProxyVersion:v1.20.0-alpha.3.114+5935fcd704fe89,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.20.0-alpha.3.114_5935fcd704fe89],SizeBytes:140129137,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a k8s.gcr.io/e2e-test-images/agnhost:2.21],SizeBytes:46251468,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/sig-storage/snapshot-controller@sha256:36ca32433c069246ea8988a7b3dbdf0aabf8345be9122b8a25426e6c487878de k8s.gcr.io/sig-storage/snapshot-controller:v3.0.0],SizeBytes:17462937,},ContainerImage{Names:[k8s.gcr.io/cluster-proportional-autoscaler-amd64@sha256:be8875e5584750b7a490244ee56a121a714aa3d124164a5090cd8b3570c5650f k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.8.1],SizeBytes:15208262,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:13982350,},ContainerImage{Names:[k8s.gcr.io/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b k8s.gcr.io/metrics-server-amd64:v0.3.6],SizeBytes:10542830,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e k8s.gcr.io/prometheus-to-sd:v0.5.0],SizeBytes:9515805,},ContainerImage{Names:[k8s.gcr.io/addon-resizer@sha256:35745de3c9a2884d53ad0e81b39f1eed9a7c77f5f909b9e84f9712b37ffb3021 k8s.gcr.io/addon-resizer:1.8.11],SizeBytes:9347950,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:299513,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},}
Oct 25 16:58:28.654: INFO: 
Logging kubelet events for node bootstrap-e2e-minion-group-nmms
Oct 25 16:58:28.702: INFO: 
Logging pods the kubelet thinks are on node bootstrap-e2e-minion-group-nmms
Oct 25 16:58:28.763: INFO: kube-proxy-bootstrap-e2e-minion-group-nmms started at 2020-10-25 16:09:12 +0000 UTC (0+1 container statuses recorded)
Oct 25 16:58:28.764: INFO: 	Container kube-proxy ready: true, restart count 0
... skipping 20 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl server-side dry-run
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:909
    should check if kubectl can dry-run update Pods [Conformance] [It]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629

    Oct 25 16:58:27.560: Unexpected error:
        <exec.CodeExitError>: {
            Err: {
                s: "error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.44.183 --kubeconfig=/workspace/.kube/config --namespace=kubectl-4807 replace -f - --dry-run=server:\nCommand stdout:\n\nstderr:\nError from server (Conflict): error when replacing \"STDIN\": Operation cannot be fulfilled on pods \"e2e-test-httpd-pod\": the object has been modified; please apply your changes to the latest version and try again\n\nerror:\nexit status 1",
            },
            Code: 1,
        }
        error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.44.183 --kubeconfig=/workspace/.kube/config --namespace=kubectl-4807 replace -f - --dry-run=server:
        Command stdout:
        
        stderr:
        Error from server (Conflict): error when replacing "STDIN": Operation cannot be fulfilled on pods "e2e-test-httpd-pod": the object has been modified; please apply your changes to the latest version and try again
        
        error:
        exit status 1
    occurred

    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:598
------------------------------
{"msg":"FAILED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":306,"completed":175,"skipped":2986,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] ReplicationController
... skipping 13 lines ...
Oct 25 16:58:31.445: INFO: Trying to dial the pod
Oct 25 16:58:36.562: INFO: Controller my-hostname-basic-b3c0a264-a358-43d1-bae1-3a6e0fbce5c5: Got expected result from replica 1 [my-hostname-basic-b3c0a264-a358-43d1-bae1-3a6e0fbce5c5-vkhmg]: "my-hostname-basic-b3c0a264-a358-43d1-bae1-3a6e0fbce5c5-vkhmg", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:58:36.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9429" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":306,"completed":176,"skipped":3009,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 11 lines ...
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:59:02.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1377" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":306,"completed":177,"skipped":3053,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSS
------------------------------
[sig-network] Services 
  should test the lifecycle of an Endpoint [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Services
... skipping 19 lines ...
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:59:02.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-741" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:781
•{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":306,"completed":178,"skipped":3056,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] DNS
... skipping 16 lines ...

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:59:07.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5362" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":306,"completed":179,"skipped":3063,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] ReplicaSet
... skipping 11 lines ...
Oct 25 16:59:10.479: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 16:59:10.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-1548" for this suite.
•{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":306,"completed":180,"skipped":3066,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
... skipping 28 lines ...
Oct 25 17:00:41.040: INFO: Terminating ReplicationController wrapped-volume-race-471239b0-59de-40f9-80a2-f28c5386e8a1 pods took: 700.4609ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:01:23.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-6403" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":306,"completed":181,"skipped":3069,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected configMap
... skipping 12 lines ...
STEP: Creating configMap with name cm-test-opt-create-234d1ae8-e65d-4171-8d0f-7033a9124737
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:01:30.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6864" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":306,"completed":182,"skipped":3088,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Networking
... skipping 34 lines ...
Oct 25 17:01:52.942: INFO: >>> kubeConfig: /workspace/.kube/config
Oct 25 17:01:54.265: INFO: Found all 1 expected endpoints: [netserver-2]
[AfterEach] [sig-network] Networking
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:01:54.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4377" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":183,"skipped":3102,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name projected-configmap-test-volume-edb46c69-e267-4415-beff-fb914edfeadc
STEP: Creating a pod to test consume configMaps
Oct 25 17:01:54.993: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4811e742-dd49-48bf-8c49-0a6222ce70f4" in namespace "projected-5719" to be "Succeeded or Failed"
Oct 25 17:01:55.033: INFO: Pod "pod-projected-configmaps-4811e742-dd49-48bf-8c49-0a6222ce70f4": Phase="Pending", Reason="", readiness=false. Elapsed: 39.656188ms
Oct 25 17:01:57.071: INFO: Pod "pod-projected-configmaps-4811e742-dd49-48bf-8c49-0a6222ce70f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.076911967s
STEP: Saw pod success
Oct 25 17:01:57.071: INFO: Pod "pod-projected-configmaps-4811e742-dd49-48bf-8c49-0a6222ce70f4" satisfied condition "Succeeded or Failed"
Oct 25 17:01:57.107: INFO: Trying to get logs from node bootstrap-e2e-minion-group-jzdr pod pod-projected-configmaps-4811e742-dd49-48bf-8c49-0a6222ce70f4 container agnhost-container: <nil>
STEP: delete the pod
Oct 25 17:01:57.236: INFO: Waiting for pod pod-projected-configmaps-4811e742-dd49-48bf-8c49-0a6222ce70f4 to disappear
Oct 25 17:01:57.279: INFO: Pod pod-projected-configmaps-4811e742-dd49-48bf-8c49-0a6222ce70f4 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:01:57.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5719" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":306,"completed":184,"skipped":3119,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Kubelet
... skipping 7 lines ...
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[AfterEach] [k8s.io] Kubelet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:01:59.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9074" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":185,"skipped":3136,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Oct 25 17:01:59.836: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward api env vars
Oct 25 17:02:00.098: INFO: Waiting up to 5m0s for pod "downward-api-5558fe42-7860-4f09-a54f-a555ede8c5f4" in namespace "downward-api-1184" to be "Succeeded or Failed"
Oct 25 17:02:00.140: INFO: Pod "downward-api-5558fe42-7860-4f09-a54f-a555ede8c5f4": Phase="Pending", Reason="", readiness=false. Elapsed: 41.22056ms
Oct 25 17:02:02.178: INFO: Pod "downward-api-5558fe42-7860-4f09-a54f-a555ede8c5f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.079517166s
STEP: Saw pod success
Oct 25 17:02:02.178: INFO: Pod "downward-api-5558fe42-7860-4f09-a54f-a555ede8c5f4" satisfied condition "Succeeded or Failed"
Oct 25 17:02:02.215: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod downward-api-5558fe42-7860-4f09-a54f-a555ede8c5f4 container dapi-container: <nil>
STEP: delete the pod
Oct 25 17:02:02.407: INFO: Waiting for pod downward-api-5558fe42-7860-4f09-a54f-a555ede8c5f4 to disappear
Oct 25 17:02:02.444: INFO: Pod downward-api-5558fe42-7860-4f09-a54f-a555ede8c5f4 no longer exists
[AfterEach] [sig-node] Downward API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:02:02.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1184" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":306,"completed":186,"skipped":3173,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] DNS
... skipping 26 lines ...
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Oct 25 17:02:09.745: INFO: File wheezy_udp@dns-test-service-3.dns-3679.svc.cluster.local from pod  dns-3679/dns-test-66e48c5f-bafc-4913-881a-e10da0fb1fe3 contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct 25 17:02:09.792: INFO: File jessie_udp@dns-test-service-3.dns-3679.svc.cluster.local from pod  dns-3679/dns-test-66e48c5f-bafc-4913-881a-e10da0fb1fe3 contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct 25 17:02:09.792: INFO: Lookups using dns-3679/dns-test-66e48c5f-bafc-4913-881a-e10da0fb1fe3 failed for: [wheezy_udp@dns-test-service-3.dns-3679.svc.cluster.local jessie_udp@dns-test-service-3.dns-3679.svc.cluster.local]

Oct 25 17:02:14.860: INFO: File wheezy_udp@dns-test-service-3.dns-3679.svc.cluster.local from pod  dns-3679/dns-test-66e48c5f-bafc-4913-881a-e10da0fb1fe3 contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct 25 17:02:14.909: INFO: File jessie_udp@dns-test-service-3.dns-3679.svc.cluster.local from pod  dns-3679/dns-test-66e48c5f-bafc-4913-881a-e10da0fb1fe3 contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct 25 17:02:14.909: INFO: Lookups using dns-3679/dns-test-66e48c5f-bafc-4913-881a-e10da0fb1fe3 failed for: [wheezy_udp@dns-test-service-3.dns-3679.svc.cluster.local jessie_udp@dns-test-service-3.dns-3679.svc.cluster.local]

Oct 25 17:02:19.832: INFO: File wheezy_udp@dns-test-service-3.dns-3679.svc.cluster.local from pod  dns-3679/dns-test-66e48c5f-bafc-4913-881a-e10da0fb1fe3 contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct 25 17:02:19.871: INFO: File jessie_udp@dns-test-service-3.dns-3679.svc.cluster.local from pod  dns-3679/dns-test-66e48c5f-bafc-4913-881a-e10da0fb1fe3 contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct 25 17:02:19.871: INFO: Lookups using dns-3679/dns-test-66e48c5f-bafc-4913-881a-e10da0fb1fe3 failed for: [wheezy_udp@dns-test-service-3.dns-3679.svc.cluster.local jessie_udp@dns-test-service-3.dns-3679.svc.cluster.local]

Oct 25 17:02:24.834: INFO: File wheezy_udp@dns-test-service-3.dns-3679.svc.cluster.local from pod  dns-3679/dns-test-66e48c5f-bafc-4913-881a-e10da0fb1fe3 contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct 25 17:02:24.874: INFO: File jessie_udp@dns-test-service-3.dns-3679.svc.cluster.local from pod  dns-3679/dns-test-66e48c5f-bafc-4913-881a-e10da0fb1fe3 contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct 25 17:02:24.874: INFO: Lookups using dns-3679/dns-test-66e48c5f-bafc-4913-881a-e10da0fb1fe3 failed for: [wheezy_udp@dns-test-service-3.dns-3679.svc.cluster.local jessie_udp@dns-test-service-3.dns-3679.svc.cluster.local]

Oct 25 17:02:29.880: INFO: File wheezy_udp@dns-test-service-3.dns-3679.svc.cluster.local from pod  dns-3679/dns-test-66e48c5f-bafc-4913-881a-e10da0fb1fe3 contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct 25 17:02:29.961: INFO: File jessie_udp@dns-test-service-3.dns-3679.svc.cluster.local from pod  dns-3679/dns-test-66e48c5f-bafc-4913-881a-e10da0fb1fe3 contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct 25 17:02:29.961: INFO: Lookups using dns-3679/dns-test-66e48c5f-bafc-4913-881a-e10da0fb1fe3 failed for: [wheezy_udp@dns-test-service-3.dns-3679.svc.cluster.local jessie_udp@dns-test-service-3.dns-3679.svc.cluster.local]

Oct 25 17:02:34.873: INFO: DNS probes using dns-test-66e48c5f-bafc-4913-881a-e10da0fb1fe3 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3679.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-3679.svc.cluster.local; sleep 1; done
... skipping 2 lines ...

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Oct 25 17:02:37.408: INFO: File jessie_udp@dns-test-service-3.dns-3679.svc.cluster.local from pod  dns-3679/dns-test-9f2269f9-f9f3-4f07-902f-e14650e32305 contains '' instead of '10.0.6.71'
Oct 25 17:02:37.408: INFO: Lookups using dns-3679/dns-test-9f2269f9-f9f3-4f07-902f-e14650e32305 failed for: [jessie_udp@dns-test-service-3.dns-3679.svc.cluster.local]

Oct 25 17:02:42.540: INFO: DNS probes using dns-test-9f2269f9-f9f3-4f07-902f-e14650e32305 succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:02:42.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3679" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":306,"completed":187,"skipped":3183,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Variable Expansion
... skipping 2 lines ...
Oct 25 17:02:43.014: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test env composition
Oct 25 17:02:43.510: INFO: Waiting up to 5m0s for pod "var-expansion-140b99df-52ab-4add-9971-fde5781d9867" in namespace "var-expansion-1371" to be "Succeeded or Failed"
Oct 25 17:02:43.567: INFO: Pod "var-expansion-140b99df-52ab-4add-9971-fde5781d9867": Phase="Pending", Reason="", readiness=false. Elapsed: 56.643834ms
Oct 25 17:02:45.603: INFO: Pod "var-expansion-140b99df-52ab-4add-9971-fde5781d9867": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.093343948s
STEP: Saw pod success
Oct 25 17:02:45.603: INFO: Pod "var-expansion-140b99df-52ab-4add-9971-fde5781d9867" satisfied condition "Succeeded or Failed"
Oct 25 17:02:45.639: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod var-expansion-140b99df-52ab-4add-9971-fde5781d9867 container dapi-container: <nil>
STEP: delete the pod
Oct 25 17:02:45.725: INFO: Waiting for pod var-expansion-140b99df-52ab-4add-9971-fde5781d9867 to disappear
Oct 25 17:02:45.762: INFO: Pod var-expansion-140b99df-52ab-4add-9971-fde5781d9867 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:02:45.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1371" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":306,"completed":188,"skipped":3204,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-network] IngressClass API 
   should support creating IngressClass API operations [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] IngressClass API
... skipping 21 lines ...
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] IngressClass API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:02:46.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingressclass-2474" for this suite.
•{"msg":"PASSED [sig-network] IngressClass API  should support creating IngressClass API operations [Conformance]","total":306,"completed":189,"skipped":3219,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}

------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 15 lines ...
Oct 25 17:02:53.150: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:03:06.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7907" for this suite.
STEP: Destroying namespace "webhook-7907-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":306,"completed":190,"skipped":3219,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name configmap-test-volume-209811a5-57b9-4e4a-9f14-660fbd541671
STEP: Creating a pod to test consume configMaps
Oct 25 17:03:07.624: INFO: Waiting up to 5m0s for pod "pod-configmaps-3cc32044-918d-4dd6-bd13-c5139a86fe4f" in namespace "configmap-3842" to be "Succeeded or Failed"
Oct 25 17:03:07.660: INFO: Pod "pod-configmaps-3cc32044-918d-4dd6-bd13-c5139a86fe4f": Phase="Pending", Reason="", readiness=false. Elapsed: 36.040518ms
Oct 25 17:03:09.698: INFO: Pod "pod-configmaps-3cc32044-918d-4dd6-bd13-c5139a86fe4f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.074173028s
STEP: Saw pod success
Oct 25 17:03:09.698: INFO: Pod "pod-configmaps-3cc32044-918d-4dd6-bd13-c5139a86fe4f" satisfied condition "Succeeded or Failed"
Oct 25 17:03:09.735: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod pod-configmaps-3cc32044-918d-4dd6-bd13-c5139a86fe4f container configmap-volume-test: <nil>
STEP: delete the pod
Oct 25 17:03:09.834: INFO: Waiting for pod pod-configmaps-3cc32044-918d-4dd6-bd13-c5139a86fe4f to disappear
Oct 25 17:03:09.871: INFO: Pod pod-configmaps-3cc32044-918d-4dd6-bd13-c5139a86fe4f no longer exists
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:03:09.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3842" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":191,"skipped":3258,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 11 lines ...
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:03:10.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-821" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":306,"completed":192,"skipped":3261,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Oct 25 17:03:10.806: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward api env vars
Oct 25 17:03:11.039: INFO: Waiting up to 5m0s for pod "downward-api-f283880b-6a49-4729-9091-488dae2d17b7" in namespace "downward-api-481" to be "Succeeded or Failed"
Oct 25 17:03:11.084: INFO: Pod "downward-api-f283880b-6a49-4729-9091-488dae2d17b7": Phase="Pending", Reason="", readiness=false. Elapsed: 44.808525ms
Oct 25 17:03:13.239: INFO: Pod "downward-api-f283880b-6a49-4729-9091-488dae2d17b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.199700957s
STEP: Saw pod success
Oct 25 17:03:13.239: INFO: Pod "downward-api-f283880b-6a49-4729-9091-488dae2d17b7" satisfied condition "Succeeded or Failed"
Oct 25 17:03:13.276: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod downward-api-f283880b-6a49-4729-9091-488dae2d17b7 container dapi-container: <nil>
STEP: delete the pod
Oct 25 17:03:13.393: INFO: Waiting for pod downward-api-f283880b-6a49-4729-9091-488dae2d17b7 to disappear
Oct 25 17:03:13.430: INFO: Pod downward-api-f283880b-6a49-4729-9091-488dae2d17b7 no longer exists
[AfterEach] [sig-node] Downward API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:03:13.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-481" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":306,"completed":193,"skipped":3263,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath 
  runs ReplicaSets to verify preemption running path [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
... skipping 32 lines ...
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:04:43.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-7691" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77
•{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":306,"completed":194,"skipped":3275,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 9 lines ...
STEP: Creating the pod
Oct 25 17:04:47.052: INFO: Successfully updated pod "annotationupdate2f90571e-ffd1-416d-a357-29ec123cba1e"
[AfterEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:04:49.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6212" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":306,"completed":195,"skipped":3305,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Watchers
... skipping 22 lines ...
Oct 25 17:05:01.252: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-891  8f72da17-630e-4e26-8034-935e712fecca 15590 0 2020-10-25 17:04:50 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-10-25 17:04:50 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
Oct 25 17:05:01.252: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-891  8f72da17-630e-4e26-8034-935e712fecca 15591 0 2020-10-25 17:04:50 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-10-25 17:04:50 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:05:01.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-891" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":306,"completed":196,"skipped":3333,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected configMap
... skipping 9 lines ...
STEP: Updating configmap projected-configmap-test-upd-316a6673-2f0b-4ec6-8575-c437bf2d6ef6
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:06:20.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9067" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":306,"completed":197,"skipped":3336,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Oct 25 17:06:20.475: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir volume type on tmpfs
Oct 25 17:06:20.718: INFO: Waiting up to 5m0s for pod "pod-6a528a5a-a04b-47c3-96d2-51e204748db2" in namespace "emptydir-128" to be "Succeeded or Failed"
Oct 25 17:06:20.756: INFO: Pod "pod-6a528a5a-a04b-47c3-96d2-51e204748db2": Phase="Pending", Reason="", readiness=false. Elapsed: 37.534293ms
Oct 25 17:06:22.793: INFO: Pod "pod-6a528a5a-a04b-47c3-96d2-51e204748db2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075010798s
Oct 25 17:06:24.831: INFO: Pod "pod-6a528a5a-a04b-47c3-96d2-51e204748db2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.11262037s
STEP: Saw pod success
Oct 25 17:06:24.831: INFO: Pod "pod-6a528a5a-a04b-47c3-96d2-51e204748db2" satisfied condition "Succeeded or Failed"
Oct 25 17:06:24.869: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod pod-6a528a5a-a04b-47c3-96d2-51e204748db2 container test-container: <nil>
STEP: delete the pod
Oct 25 17:06:24.960: INFO: Waiting for pod pod-6a528a5a-a04b-47c3-96d2-51e204748db2 to disappear
Oct 25 17:06:24.998: INFO: Pod pod-6a528a5a-a04b-47c3-96d2-51e204748db2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:06:24.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-128" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":198,"skipped":3339,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Aggregator
... skipping 18 lines ...
[AfterEach] [sig-api-machinery] Aggregator
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:06:44.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-438" for this suite.
•{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":306,"completed":199,"skipped":3342,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
S
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 8 lines ...
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:06:52.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-830" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":306,"completed":200,"skipped":3343,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating secret with name secret-test-0d1237dd-a043-4eb9-9033-aee424aec0dd
STEP: Creating a pod to test consume secrets
Oct 25 17:06:52.564: INFO: Waiting up to 5m0s for pod "pod-secrets-9908b7c9-ad1b-4598-9e11-f6e0b0c245f2" in namespace "secrets-5729" to be "Succeeded or Failed"
Oct 25 17:06:52.604: INFO: Pod "pod-secrets-9908b7c9-ad1b-4598-9e11-f6e0b0c245f2": Phase="Pending", Reason="", readiness=false. Elapsed: 40.174282ms
Oct 25 17:06:54.642: INFO: Pod "pod-secrets-9908b7c9-ad1b-4598-9e11-f6e0b0c245f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.077786249s
STEP: Saw pod success
Oct 25 17:06:54.642: INFO: Pod "pod-secrets-9908b7c9-ad1b-4598-9e11-f6e0b0c245f2" satisfied condition "Succeeded or Failed"
Oct 25 17:06:54.679: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod pod-secrets-9908b7c9-ad1b-4598-9e11-f6e0b0c245f2 container secret-volume-test: <nil>
STEP: delete the pod
Oct 25 17:06:54.778: INFO: Waiting for pod pod-secrets-9908b7c9-ad1b-4598-9e11-f6e0b0c245f2 to disappear
Oct 25 17:06:54.815: INFO: Pod pod-secrets-9908b7c9-ad1b-4598-9e11-f6e0b0c245f2 no longer exists
[AfterEach] [sig-storage] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:06:54.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5729" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":306,"completed":201,"skipped":3353,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 22 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:07:00.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4445" for this suite.
STEP: Destroying namespace "webhook-4445-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":306,"completed":202,"skipped":3355,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Probing container
... skipping 12 lines ...
Oct 25 17:07:03.089: INFO: Initial restart count of pod test-webserver-dc52bbf1-9f71-48d1-bf0a-bf01f5a4d5c8 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:11:04.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4851" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":306,"completed":203,"skipped":3364,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Container Runtime
... skipping 20 lines ...
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:11:27.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4030" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":306,"completed":204,"skipped":3410,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
S
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Pods
... skipping 10 lines ...
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:11:30.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4688" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":306,"completed":205,"skipped":3411,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-node] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap configmap-6125/configmap-test-a7f78ab4-fbd8-49bb-8f78-8fc9d9f1bfc2
STEP: Creating a pod to test consume configMaps
Oct 25 17:11:30.657: INFO: Waiting up to 5m0s for pod "pod-configmaps-7511d4d2-a5c7-47f5-affe-8617b0c3b4bc" in namespace "configmap-6125" to be "Succeeded or Failed"
Oct 25 17:11:30.739: INFO: Pod "pod-configmaps-7511d4d2-a5c7-47f5-affe-8617b0c3b4bc": Phase="Pending", Reason="", readiness=false. Elapsed: 81.341098ms
Oct 25 17:11:32.776: INFO: Pod "pod-configmaps-7511d4d2-a5c7-47f5-affe-8617b0c3b4bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.118573345s
STEP: Saw pod success
Oct 25 17:11:32.776: INFO: Pod "pod-configmaps-7511d4d2-a5c7-47f5-affe-8617b0c3b4bc" satisfied condition "Succeeded or Failed"
Oct 25 17:11:32.813: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod pod-configmaps-7511d4d2-a5c7-47f5-affe-8617b0c3b4bc container env-test: <nil>
STEP: delete the pod
Oct 25 17:11:32.925: INFO: Waiting for pod pod-configmaps-7511d4d2-a5c7-47f5-affe-8617b0c3b4bc to disappear
Oct 25 17:11:32.966: INFO: Pod pod-configmaps-7511d4d2-a5c7-47f5-affe-8617b0c3b4bc no longer exists
[AfterEach] [sig-node] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:11:32.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6125" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":306,"completed":206,"skipped":3431,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Oct 25 17:11:33.069: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0644 on tmpfs
Oct 25 17:11:33.426: INFO: Waiting up to 5m0s for pod "pod-6a86d179-d112-4ab1-bc05-8cd3e1b8e4cd" in namespace "emptydir-6924" to be "Succeeded or Failed"
Oct 25 17:11:33.508: INFO: Pod "pod-6a86d179-d112-4ab1-bc05-8cd3e1b8e4cd": Phase="Pending", Reason="", readiness=false. Elapsed: 82.806722ms
Oct 25 17:11:35.552: INFO: Pod "pod-6a86d179-d112-4ab1-bc05-8cd3e1b8e4cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.126319146s
STEP: Saw pod success
Oct 25 17:11:35.552: INFO: Pod "pod-6a86d179-d112-4ab1-bc05-8cd3e1b8e4cd" satisfied condition "Succeeded or Failed"
Oct 25 17:11:35.588: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod pod-6a86d179-d112-4ab1-bc05-8cd3e1b8e4cd container test-container: <nil>
STEP: delete the pod
Oct 25 17:11:35.673: INFO: Waiting for pod pod-6a86d179-d112-4ab1-bc05-8cd3e1b8e4cd to disappear
Oct 25 17:11:35.711: INFO: Pod pod-6a86d179-d112-4ab1-bc05-8cd3e1b8e4cd no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:11:35.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6924" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":207,"skipped":3446,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Probing container
... skipping 7 lines ...
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[AfterEach] [k8s.io] Probing container
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:12:36.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2764" for this suite.
•{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":306,"completed":208,"skipped":3449,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name projected-configmap-test-volume-map-85425e38-c263-4b32-a02c-488e08588ce7
STEP: Creating a pod to test consume configMaps
Oct 25 17:12:36.417: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-90a81a68-ac99-4fbc-83e9-59d8bf2bf26b" in namespace "projected-9420" to be "Succeeded or Failed"
Oct 25 17:12:36.458: INFO: Pod "pod-projected-configmaps-90a81a68-ac99-4fbc-83e9-59d8bf2bf26b": Phase="Pending", Reason="", readiness=false. Elapsed: 41.395504ms
Oct 25 17:12:38.507: INFO: Pod "pod-projected-configmaps-90a81a68-ac99-4fbc-83e9-59d8bf2bf26b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.089680724s
STEP: Saw pod success
Oct 25 17:12:38.507: INFO: Pod "pod-projected-configmaps-90a81a68-ac99-4fbc-83e9-59d8bf2bf26b" satisfied condition "Succeeded or Failed"
Oct 25 17:12:38.559: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod pod-projected-configmaps-90a81a68-ac99-4fbc-83e9-59d8bf2bf26b container agnhost-container: <nil>
STEP: delete the pod
Oct 25 17:12:38.773: INFO: Waiting for pod pod-projected-configmaps-90a81a68-ac99-4fbc-83e9-59d8bf2bf26b to disappear
Oct 25 17:12:38.809: INFO: Pod pod-projected-configmaps-90a81a68-ac99-4fbc-83e9-59d8bf2bf26b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:12:38.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9420" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":306,"completed":209,"skipped":3471,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 16 lines ...
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:12:52.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7478" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":306,"completed":210,"skipped":3482,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name configmap-test-volume-map-ad707122-2c9f-44c8-9348-e31f807de9ab
STEP: Creating a pod to test consume configMaps
Oct 25 17:12:53.540: INFO: Waiting up to 5m0s for pod "pod-configmaps-135191a7-6fcd-4366-848e-3efdfa275f3e" in namespace "configmap-8549" to be "Succeeded or Failed"
Oct 25 17:12:53.581: INFO: Pod "pod-configmaps-135191a7-6fcd-4366-848e-3efdfa275f3e": Phase="Pending", Reason="", readiness=false. Elapsed: 40.616812ms
Oct 25 17:12:55.619: INFO: Pod "pod-configmaps-135191a7-6fcd-4366-848e-3efdfa275f3e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.078727247s
STEP: Saw pod success
Oct 25 17:12:55.619: INFO: Pod "pod-configmaps-135191a7-6fcd-4366-848e-3efdfa275f3e" satisfied condition "Succeeded or Failed"
Oct 25 17:12:55.656: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod pod-configmaps-135191a7-6fcd-4366-848e-3efdfa275f3e container configmap-volume-test: <nil>
STEP: delete the pod
Oct 25 17:12:55.777: INFO: Waiting for pod pod-configmaps-135191a7-6fcd-4366-848e-3efdfa275f3e to disappear
Oct 25 17:12:55.815: INFO: Pod pod-configmaps-135191a7-6fcd-4366-848e-3efdfa275f3e no longer exists
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:12:55.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8549" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":306,"completed":211,"skipped":3486,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
S
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] 
  validates lower priority pod preemption by critical pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
... skipping 17 lines ...
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:14:13.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-6747" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77
•{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":306,"completed":212,"skipped":3487,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
... skipping 9 lines ...
STEP: creating the pod
Oct 25 17:14:14.209: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:14:20.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-6774" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":306,"completed":213,"skipped":3522,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}

------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Oct 25 17:14:21.023: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0644 on tmpfs
Oct 25 17:14:21.294: INFO: Waiting up to 5m0s for pod "pod-2cb3131a-476c-4c59-b2e8-fe561e1ae133" in namespace "emptydir-726" to be "Succeeded or Failed"
Oct 25 17:14:21.368: INFO: Pod "pod-2cb3131a-476c-4c59-b2e8-fe561e1ae133": Phase="Pending", Reason="", readiness=false. Elapsed: 73.899677ms
Oct 25 17:14:23.409: INFO: Pod "pod-2cb3131a-476c-4c59-b2e8-fe561e1ae133": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114405466s
Oct 25 17:14:25.446: INFO: Pod "pod-2cb3131a-476c-4c59-b2e8-fe561e1ae133": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.151170518s
STEP: Saw pod success
Oct 25 17:14:25.446: INFO: Pod "pod-2cb3131a-476c-4c59-b2e8-fe561e1ae133" satisfied condition "Succeeded or Failed"
Oct 25 17:14:25.483: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod pod-2cb3131a-476c-4c59-b2e8-fe561e1ae133 container test-container: <nil>
STEP: delete the pod
Oct 25 17:14:25.569: INFO: Waiting for pod pod-2cb3131a-476c-4c59-b2e8-fe561e1ae133 to disappear
Oct 25 17:14:25.605: INFO: Pod pod-2cb3131a-476c-4c59-b2e8-fe561e1ae133 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:14:25.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-726" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":214,"skipped":3522,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 25 17:14:25.684: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Oct 25 17:14:27.985: INFO: Deleting pod "var-expansion-5a76eade-cad2-4ba8-b6d0-52873a19238b" in namespace "var-expansion-1194"
Oct 25 17:14:28.029: INFO: Wait up to 5m0s for pod "var-expansion-5a76eade-cad2-4ba8-b6d0-52873a19238b" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:15:22.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1194" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":306,"completed":215,"skipped":3528,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] [sig-node] PreStop
... skipping 25 lines ...
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:15:32.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-4114" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":306,"completed":216,"skipped":3559,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating secret with name secret-test-ce4fec51-9390-45d6-8a22-37d1fc79c5a9
STEP: Creating a pod to test consume secrets
Oct 25 17:15:32.480: INFO: Waiting up to 5m0s for pod "pod-secrets-77010cba-5490-4663-b019-802ac312b288" in namespace "secrets-4315" to be "Succeeded or Failed"
Oct 25 17:15:32.546: INFO: Pod "pod-secrets-77010cba-5490-4663-b019-802ac312b288": Phase="Pending", Reason="", readiness=false. Elapsed: 65.587977ms
Oct 25 17:15:34.596: INFO: Pod "pod-secrets-77010cba-5490-4663-b019-802ac312b288": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.116076925s
STEP: Saw pod success
Oct 25 17:15:34.596: INFO: Pod "pod-secrets-77010cba-5490-4663-b019-802ac312b288" satisfied condition "Succeeded or Failed"
Oct 25 17:15:34.651: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod pod-secrets-77010cba-5490-4663-b019-802ac312b288 container secret-volume-test: <nil>
STEP: delete the pod
Oct 25 17:15:34.885: INFO: Waiting for pod pod-secrets-77010cba-5490-4663-b019-802ac312b288 to disappear
Oct 25 17:15:34.945: INFO: Pod pod-secrets-77010cba-5490-4663-b019-802ac312b288 no longer exists
[AfterEach] [sig-storage] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:15:34.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4315" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":217,"skipped":3562,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Pods
... skipping 14 lines ...
STEP: verifying the updated pod is in kubernetes
Oct 25 17:15:38.625: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:15:38.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8033" for this suite.
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":306,"completed":218,"skipped":3595,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should patch a Namespace [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 9 lines ...
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:15:39.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-6870" for this suite.
STEP: Destroying namespace "nspatchtest-9b249cfc-62cb-4dfd-ae21-28c70a8d0448-8938" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":306,"completed":219,"skipped":3602,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should run through a ConfigMap lifecycle [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-node] ConfigMap
... skipping 11 lines ...
STEP: deleting the ConfigMap by collection with a label selector
STEP: listing all ConfigMaps in test namespace
[AfterEach] [sig-node] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:15:39.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8081" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":306,"completed":220,"skipped":3610,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
... skipping 18 lines ...
STEP: Deleting second CR
Oct 25 17:16:32.220: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-10-25T17:15:52Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-10-25T17:16:12Z]] name:name2 resourceVersion:17554 uid:f2faeb52-23ac-4deb-9a0a-9ac248b86fa8] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:16:42.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-6460" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":306,"completed":221,"skipped":3618,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Probing container
... skipping 13 lines ...
Oct 25 17:17:32.518: INFO: Restart count of pod container-probe-6572/busybox-40052e79-1e37-4e6d-8f1e-9859c519aea5 is now 1 (47.172376187s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:17:32.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6572" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":306,"completed":222,"skipped":3621,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating secret with name projected-secret-test-7f2d3ba8-4cfd-494b-8bcd-18346f5694d3
STEP: Creating a pod to test consume secrets
Oct 25 17:17:32.955: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3e9028dd-0e52-4dc4-b458-47e6434c1ef8" in namespace "projected-7513" to be "Succeeded or Failed"
Oct 25 17:17:33.005: INFO: Pod "pod-projected-secrets-3e9028dd-0e52-4dc4-b458-47e6434c1ef8": Phase="Pending", Reason="", readiness=false. Elapsed: 49.730274ms
Oct 25 17:17:35.043: INFO: Pod "pod-projected-secrets-3e9028dd-0e52-4dc4-b458-47e6434c1ef8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.087284126s
STEP: Saw pod success
Oct 25 17:17:35.043: INFO: Pod "pod-projected-secrets-3e9028dd-0e52-4dc4-b458-47e6434c1ef8" satisfied condition "Succeeded or Failed"
Oct 25 17:17:35.079: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod pod-projected-secrets-3e9028dd-0e52-4dc4-b458-47e6434c1ef8 container secret-volume-test: <nil>
STEP: delete the pod
Oct 25 17:17:35.176: INFO: Waiting for pod pod-projected-secrets-3e9028dd-0e52-4dc4-b458-47e6434c1ef8 to disappear
Oct 25 17:17:35.213: INFO: Pod pod-projected-secrets-3e9028dd-0e52-4dc4-b458-47e6434c1ef8 no longer exists
[AfterEach] [sig-storage] Projected secret
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:17:35.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7513" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":306,"completed":223,"skipped":3625,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name configmap-test-volume-77a856e6-9620-4530-af19-e4642fcf3c6a
STEP: Creating a pod to test consume configMaps
Oct 25 17:17:35.553: INFO: Waiting up to 5m0s for pod "pod-configmaps-f86ffacb-f0f4-4958-a8c9-6ff0d502e0fa" in namespace "configmap-3489" to be "Succeeded or Failed"
Oct 25 17:17:35.602: INFO: Pod "pod-configmaps-f86ffacb-f0f4-4958-a8c9-6ff0d502e0fa": Phase="Pending", Reason="", readiness=false. Elapsed: 49.387476ms
Oct 25 17:17:37.660: INFO: Pod "pod-configmaps-f86ffacb-f0f4-4958-a8c9-6ff0d502e0fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.106760295s
STEP: Saw pod success
Oct 25 17:17:37.660: INFO: Pod "pod-configmaps-f86ffacb-f0f4-4958-a8c9-6ff0d502e0fa" satisfied condition "Succeeded or Failed"
Oct 25 17:17:37.724: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod pod-configmaps-f86ffacb-f0f4-4958-a8c9-6ff0d502e0fa container configmap-volume-test: <nil>
STEP: delete the pod
Oct 25 17:17:37.976: INFO: Waiting for pod pod-configmaps-f86ffacb-f0f4-4958-a8c9-6ff0d502e0fa to disappear
Oct 25 17:17:38.082: INFO: Pod pod-configmaps-f86ffacb-f0f4-4958-a8c9-6ff0d502e0fa no longer exists
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:17:38.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3489" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":306,"completed":224,"skipped":3654,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Probing container
... skipping 12 lines ...
Oct 25 17:17:43.013: INFO: Initial restart count of pod busybox-ac5080eb-e5e8-473a-9782-3322ccd67654 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:21:44.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4892" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":306,"completed":225,"skipped":3659,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 36 lines ...
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:21:48.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9524" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":306,"completed":226,"skipped":3663,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 12 lines ...
Oct 25 17:21:51.326: INFO: >>> kubeConfig: /workspace/.kube/config
Oct 25 17:21:51.609: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:21:51.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6310" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":306,"completed":227,"skipped":3675,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-node] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 25 17:21:51.689: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap that has name configmap-test-emptyKey-afd2c0f8-8c4c-4ac0-a3a4-d4110953dce0
[AfterEach] [sig-node] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:21:52.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7775" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":306,"completed":228,"skipped":3687,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-cli] Kubectl client
... skipping 160 lines ...
Oct 25 17:21:55.045: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.44.183 --kubeconfig=/workspace/.kube/config --namespace=kubectl-5899 create -f -'
Oct 25 17:21:55.456: INFO: stderr: ""
Oct 25 17:21:55.456: INFO: stdout: "deployment.apps/agnhost-replica created\n"
STEP: validating guestbook app
Oct 25 17:21:55.456: INFO: Waiting for all frontend pods to be Running.
Oct 25 17:22:00.556: INFO: Waiting for frontend to serve content.
Oct 25 17:22:01.659: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: 
Oct 25 17:22:06.706: INFO: Trying to add a new entry to the guestbook.
Oct 25 17:22:06.755: INFO: Verifying that added entry can be retrieved.
Oct 25 17:22:06.799: INFO: Failed to get response from guestbook. err: <nil>, response: {"data":""}
STEP: using delete to clean up resources
Oct 25 17:22:11.849: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.44.183 --kubeconfig=/workspace/.kube/config --namespace=kubectl-5899 delete --grace-period=0 --force -f -'
Oct 25 17:22:12.109: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Oct 25 17:22:12.109: INFO: stdout: "service \"agnhost-replica\" force deleted\n"
STEP: using delete to clean up resources
Oct 25 17:22:12.109: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.44.183 --kubeconfig=/workspace/.kube/config --namespace=kubectl-5899 delete --grace-period=0 --force -f -'
... skipping 16 lines ...
Oct 25 17:22:13.469: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Oct 25 17:22:13.469: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:22:13.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5899" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":306,"completed":229,"skipped":3762,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating projection with secret that has name projected-secret-test-d8602438-b301-4816-81a3-45a66756d102
STEP: Creating a pod to test consume secrets
Oct 25 17:22:14.189: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-23093b90-5c6b-452e-9d67-c485d502c662" in namespace "projected-4068" to be "Succeeded or Failed"
Oct 25 17:22:14.260: INFO: Pod "pod-projected-secrets-23093b90-5c6b-452e-9d67-c485d502c662": Phase="Pending", Reason="", readiness=false. Elapsed: 71.071925ms
Oct 25 17:22:16.297: INFO: Pod "pod-projected-secrets-23093b90-5c6b-452e-9d67-c485d502c662": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10831671s
Oct 25 17:22:18.337: INFO: Pod "pod-projected-secrets-23093b90-5c6b-452e-9d67-c485d502c662": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.148205609s
STEP: Saw pod success
Oct 25 17:22:18.337: INFO: Pod "pod-projected-secrets-23093b90-5c6b-452e-9d67-c485d502c662" satisfied condition "Succeeded or Failed"
Oct 25 17:22:18.379: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod pod-projected-secrets-23093b90-5c6b-452e-9d67-c485d502c662 container projected-secret-volume-test: <nil>
STEP: delete the pod
Oct 25 17:22:18.650: INFO: Waiting for pod pod-projected-secrets-23093b90-5c6b-452e-9d67-c485d502c662 to disappear
Oct 25 17:22:18.728: INFO: Pod pod-projected-secrets-23093b90-5c6b-452e-9d67-c485d502c662 no longer exists
[AfterEach] [sig-storage] Projected secret
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:22:18.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4068" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":230,"skipped":3775,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Probing container
... skipping 12 lines ...
Oct 25 17:22:21.796: INFO: Initial restart count of pod liveness-81fc0856-366a-4e8b-990d-5eb3d074ef4a is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:26:23.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4744" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":306,"completed":231,"skipped":3781,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating pod pod-subpath-test-secret-8z2q
STEP: Creating a pod to test atomic-volume-subpath
Oct 25 17:26:24.421: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-8z2q" in namespace "subpath-3571" to be "Succeeded or Failed"
Oct 25 17:26:24.458: INFO: Pod "pod-subpath-test-secret-8z2q": Phase="Pending", Reason="", readiness=false. Elapsed: 36.804465ms
Oct 25 17:26:26.497: INFO: Pod "pod-subpath-test-secret-8z2q": Phase="Running", Reason="", readiness=true. Elapsed: 2.075263559s
Oct 25 17:26:28.534: INFO: Pod "pod-subpath-test-secret-8z2q": Phase="Running", Reason="", readiness=true. Elapsed: 4.112471943s
Oct 25 17:26:30.572: INFO: Pod "pod-subpath-test-secret-8z2q": Phase="Running", Reason="", readiness=true. Elapsed: 6.150084012s
Oct 25 17:26:32.610: INFO: Pod "pod-subpath-test-secret-8z2q": Phase="Running", Reason="", readiness=true. Elapsed: 8.188457s
Oct 25 17:26:34.647: INFO: Pod "pod-subpath-test-secret-8z2q": Phase="Running", Reason="", readiness=true. Elapsed: 10.225442244s
Oct 25 17:26:36.684: INFO: Pod "pod-subpath-test-secret-8z2q": Phase="Running", Reason="", readiness=true. Elapsed: 12.262559163s
Oct 25 17:26:38.721: INFO: Pod "pod-subpath-test-secret-8z2q": Phase="Running", Reason="", readiness=true. Elapsed: 14.299444305s
Oct 25 17:26:40.902: INFO: Pod "pod-subpath-test-secret-8z2q": Phase="Running", Reason="", readiness=true. Elapsed: 16.480068519s
Oct 25 17:26:42.940: INFO: Pod "pod-subpath-test-secret-8z2q": Phase="Running", Reason="", readiness=true. Elapsed: 18.517985259s
Oct 25 17:26:44.977: INFO: Pod "pod-subpath-test-secret-8z2q": Phase="Running", Reason="", readiness=true. Elapsed: 20.555463069s
Oct 25 17:26:47.014: INFO: Pod "pod-subpath-test-secret-8z2q": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.592367428s
STEP: Saw pod success
Oct 25 17:26:47.014: INFO: Pod "pod-subpath-test-secret-8z2q" satisfied condition "Succeeded or Failed"
Oct 25 17:26:47.051: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod pod-subpath-test-secret-8z2q container test-container-subpath-secret-8z2q: <nil>
STEP: delete the pod
Oct 25 17:26:47.346: INFO: Waiting for pod pod-subpath-test-secret-8z2q to disappear
Oct 25 17:26:47.383: INFO: Pod pod-subpath-test-secret-8z2q no longer exists
STEP: Deleting pod pod-subpath-test-secret-8z2q
Oct 25 17:26:47.383: INFO: Deleting pod "pod-subpath-test-secret-8z2q" in namespace "subpath-3571"
[AfterEach] [sig-storage] Subpath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:26:47.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3571" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":306,"completed":232,"skipped":3783,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
... skipping 9 lines ...
STEP: creating the pod
Oct 25 17:26:47.807: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:26:51.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9492" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":306,"completed":233,"skipped":3795,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
S
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Services
... skipping 10 lines ...
STEP: creating replication controller nodeport-test in namespace services-8420
I1025 17:26:51.499412  144261 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-8420, replica count: 2
Oct 25 17:26:54.550: INFO: Creating new exec pod
I1025 17:26:54.549999  144261 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Oct 25 17:26:57.785: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.44.183 --kubeconfig=/workspace/.kube/config --namespace=services-8420 exec execpodtqhw5 -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
Oct 25 17:26:59.475: INFO: rc: 1
Oct 25 17:26:59.475: INFO: Service reachability failing with error: error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.44.183 --kubeconfig=/workspace/.kube/config --namespace=services-8420 exec execpodtqhw5 -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 nodeport-test 80
nc: connect to nodeport-test port 80 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 25 17:27:00.475: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.44.183 --kubeconfig=/workspace/.kube/config --namespace=services-8420 exec execpodtqhw5 -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
Oct 25 17:27:02.167: INFO: stderr: "+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n"
Oct 25 17:27:02.167: INFO: stdout: ""
Oct 25 17:27:02.167: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.44.183 --kubeconfig=/workspace/.kube/config --namespace=services-8420 exec execpodtqhw5 -- /bin/sh -x -c nc -zv -t -w 2 10.0.55.62 80'
... skipping 14 lines ...
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:27:04.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8420" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:781
•{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":306,"completed":234,"skipped":3796,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] ConfigMap
... skipping 9 lines ...
STEP: Updating configmap configmap-test-upd-3a83f809-cf63-4d9d-80dd-317ffdb4005b
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:27:09.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9596" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":306,"completed":235,"skipped":3820,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Services
... skipping 17 lines ...
STEP: creating replication controller affinity-clusterip-timeout in namespace services-7113
I1025 17:27:12.764122  144261 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-7113, replica count: 3
I1025 17:27:15.864792  144261 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Oct 25 17:27:15.938: INFO: Creating new exec pod
Oct 25 17:27:19.084: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.44.183 --kubeconfig=/workspace/.kube/config --namespace=services-7113 exec execpod-affinitylhdpp -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80'
Oct 25 17:27:21.579: INFO: rc: 1
Oct 25 17:27:21.579: INFO: Service reachability failing with error: error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.44.183 --kubeconfig=/workspace/.kube/config --namespace=services-7113 exec execpod-affinitylhdpp -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip-timeout 80
nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 25 17:27:22.579: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.44.183 --kubeconfig=/workspace/.kube/config --namespace=services-7113 exec execpod-affinitylhdpp -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80'
Oct 25 17:27:23.086: INFO: stderr: "+ nc -zv -t -w 2 affinity-clusterip-timeout 80\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\n"
Oct 25 17:27:23.086: INFO: stdout: ""
Oct 25 17:27:23.087: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.44.183 --kubeconfig=/workspace/.kube/config --namespace=services-7113 exec execpod-affinitylhdpp -- /bin/sh -x -c nc -zv -t -w 2 10.0.249.34 80'
... skipping 34 lines ...
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:28:21.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7113" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:781
•{"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":306,"completed":236,"skipped":3874,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Oct 25 17:28:21.499: INFO: Waiting up to 5m0s for pod "downwardapi-volume-05d41c9a-a373-4b23-9222-37a7a2f55433" in namespace "projected-6045" to be "Succeeded or Failed"
Oct 25 17:28:21.535: INFO: Pod "downwardapi-volume-05d41c9a-a373-4b23-9222-37a7a2f55433": Phase="Pending", Reason="", readiness=false. Elapsed: 36.038146ms
Oct 25 17:28:23.573: INFO: Pod "downwardapi-volume-05d41c9a-a373-4b23-9222-37a7a2f55433": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.073698545s
STEP: Saw pod success
Oct 25 17:28:23.573: INFO: Pod "downwardapi-volume-05d41c9a-a373-4b23-9222-37a7a2f55433" satisfied condition "Succeeded or Failed"
Oct 25 17:28:23.610: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod downwardapi-volume-05d41c9a-a373-4b23-9222-37a7a2f55433 container client-container: <nil>
STEP: delete the pod
Oct 25 17:28:23.713: INFO: Waiting for pod downwardapi-volume-05d41c9a-a373-4b23-9222-37a7a2f55433 to disappear
Oct 25 17:28:23.749: INFO: Pod downwardapi-volume-05d41c9a-a373-4b23-9222-37a7a2f55433 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:28:23.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6045" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":306,"completed":237,"skipped":3892,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-cli] Kubectl client
... skipping 51 lines ...
Oct 25 17:28:40.960: INFO: stderr: ""
Oct 25 17:28:40.960: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:28:40.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7634" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":306,"completed":238,"skipped":3902,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 60 lines ...
• [SLOW TEST:307.304 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":306,"completed":239,"skipped":3907,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Kubelet
... skipping 7 lines ...
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[AfterEach] [k8s.io] Kubelet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:33:50.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6321" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":240,"skipped":3915,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 10 lines ...
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:34:16.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-674" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":306,"completed":241,"skipped":3946,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] [sig-node] Pods Extended
... skipping 10 lines ...
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:34:16.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5474" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":306,"completed":242,"skipped":3962,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Oct 25 17:34:16.826: INFO: Waiting up to 5m0s for pod "downwardapi-volume-932d3331-833d-44fa-bb85-c7eed35aabbc" in namespace "projected-7025" to be "Succeeded or Failed"
Oct 25 17:34:16.866: INFO: Pod "downwardapi-volume-932d3331-833d-44fa-bb85-c7eed35aabbc": Phase="Pending", Reason="", readiness=false. Elapsed: 40.48824ms
Oct 25 17:34:18.935: INFO: Pod "downwardapi-volume-932d3331-833d-44fa-bb85-c7eed35aabbc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109212298s
Oct 25 17:34:21.116: INFO: Pod "downwardapi-volume-932d3331-833d-44fa-bb85-c7eed35aabbc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.289543024s
STEP: Saw pod success
Oct 25 17:34:21.116: INFO: Pod "downwardapi-volume-932d3331-833d-44fa-bb85-c7eed35aabbc" satisfied condition "Succeeded or Failed"
Oct 25 17:34:21.152: INFO: Trying to get logs from node bootstrap-e2e-minion-group-jzdr pod downwardapi-volume-932d3331-833d-44fa-bb85-c7eed35aabbc container client-container: <nil>
STEP: delete the pod
Oct 25 17:34:21.303: INFO: Waiting for pod downwardapi-volume-932d3331-833d-44fa-bb85-c7eed35aabbc to disappear
Oct 25 17:34:21.341: INFO: Pod downwardapi-volume-932d3331-833d-44fa-bb85-c7eed35aabbc no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:34:21.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7025" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":243,"skipped":3991,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name projected-configmap-test-volume-ff4ce71f-c535-41ad-a4f3-b888888d387b
STEP: Creating a pod to test consume configMaps
Oct 25 17:34:21.718: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3cbab8b1-a257-4922-a0ae-ff4da79c6a22" in namespace "projected-7586" to be "Succeeded or Failed"
Oct 25 17:34:21.759: INFO: Pod "pod-projected-configmaps-3cbab8b1-a257-4922-a0ae-ff4da79c6a22": Phase="Pending", Reason="", readiness=false. Elapsed: 40.564135ms
Oct 25 17:34:23.796: INFO: Pod "pod-projected-configmaps-3cbab8b1-a257-4922-a0ae-ff4da79c6a22": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.077973962s
STEP: Saw pod success
Oct 25 17:34:23.797: INFO: Pod "pod-projected-configmaps-3cbab8b1-a257-4922-a0ae-ff4da79c6a22" satisfied condition "Succeeded or Failed"
Oct 25 17:34:23.834: INFO: Trying to get logs from node bootstrap-e2e-minion-group-jzdr pod pod-projected-configmaps-3cbab8b1-a257-4922-a0ae-ff4da79c6a22 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Oct 25 17:34:24.092: INFO: Waiting for pod pod-projected-configmaps-3cbab8b1-a257-4922-a0ae-ff4da79c6a22 to disappear
Oct 25 17:34:24.128: INFO: Pod pod-projected-configmaps-3cbab8b1-a257-4922-a0ae-ff4da79c6a22 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:34:24.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7586" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":306,"completed":244,"skipped":4025,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name projected-configmap-test-volume-c1360000-7468-463e-a277-e4a2d2a09a3f
STEP: Creating a pod to test consume configMaps
Oct 25 17:34:24.470: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6bf70742-5d11-461a-a507-f6f8887f9687" in namespace "projected-6086" to be "Succeeded or Failed"
Oct 25 17:34:24.510: INFO: Pod "pod-projected-configmaps-6bf70742-5d11-461a-a507-f6f8887f9687": Phase="Pending", Reason="", readiness=false. Elapsed: 40.464793ms
Oct 25 17:34:26.547: INFO: Pod "pod-projected-configmaps-6bf70742-5d11-461a-a507-f6f8887f9687": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.076852195s
STEP: Saw pod success
Oct 25 17:34:26.547: INFO: Pod "pod-projected-configmaps-6bf70742-5d11-461a-a507-f6f8887f9687" satisfied condition "Succeeded or Failed"
Oct 25 17:34:26.585: INFO: Trying to get logs from node bootstrap-e2e-minion-group-jzdr pod pod-projected-configmaps-6bf70742-5d11-461a-a507-f6f8887f9687 container agnhost-container: <nil>
STEP: delete the pod
Oct 25 17:34:26.678: INFO: Waiting for pod pod-projected-configmaps-6bf70742-5d11-461a-a507-f6f8887f9687 to disappear
Oct 25 17:34:26.717: INFO: Pod pod-projected-configmaps-6bf70742-5d11-461a-a507-f6f8887f9687 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:34:26.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6086" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":245,"skipped":4031,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Probing container
... skipping 21 lines ...
Oct 25 17:34:51.534: INFO: The status of Pod test-webserver-2c66fb3b-3420-4e3d-bc83-d0fe7b30b524 is Running (Ready = true)
Oct 25 17:34:51.571: INFO: Container started at 2020-10-25 17:34:28 +0000 UTC, pod became ready at 2020-10-25 17:34:51 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:34:51.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5429" for this suite.
•{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":306,"completed":246,"skipped":4036,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Oct 25 17:34:51.649: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0666 on node default medium
Oct 25 17:34:51.976: INFO: Waiting up to 5m0s for pod "pod-e1867b51-2b53-44b6-a298-cc7caaf15af3" in namespace "emptydir-6336" to be "Succeeded or Failed"
Oct 25 17:34:52.012: INFO: Pod "pod-e1867b51-2b53-44b6-a298-cc7caaf15af3": Phase="Pending", Reason="", readiness=false. Elapsed: 36.484439ms
Oct 25 17:34:54.049: INFO: Pod "pod-e1867b51-2b53-44b6-a298-cc7caaf15af3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.07355776s
STEP: Saw pod success
Oct 25 17:34:54.049: INFO: Pod "pod-e1867b51-2b53-44b6-a298-cc7caaf15af3" satisfied condition "Succeeded or Failed"
Oct 25 17:34:54.086: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod pod-e1867b51-2b53-44b6-a298-cc7caaf15af3 container test-container: <nil>
STEP: delete the pod
Oct 25 17:34:54.248: INFO: Waiting for pod pod-e1867b51-2b53-44b6-a298-cc7caaf15af3 to disappear
Oct 25 17:34:54.302: INFO: Pod pod-e1867b51-2b53-44b6-a298-cc7caaf15af3 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:34:54.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6336" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":247,"skipped":4039,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Oct 25 17:34:54.805: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e8959141-01e1-40ab-a10e-307c1f14c5e5" in namespace "downward-api-7859" to be "Succeeded or Failed"
Oct 25 17:34:54.871: INFO: Pod "downwardapi-volume-e8959141-01e1-40ab-a10e-307c1f14c5e5": Phase="Pending", Reason="", readiness=false. Elapsed: 66.2178ms
Oct 25 17:34:56.908: INFO: Pod "downwardapi-volume-e8959141-01e1-40ab-a10e-307c1f14c5e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.103558529s
STEP: Saw pod success
Oct 25 17:34:56.908: INFO: Pod "downwardapi-volume-e8959141-01e1-40ab-a10e-307c1f14c5e5" satisfied condition "Succeeded or Failed"
Oct 25 17:34:56.946: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod downwardapi-volume-e8959141-01e1-40ab-a10e-307c1f14c5e5 container client-container: <nil>
STEP: delete the pod
Oct 25 17:34:57.047: INFO: Waiting for pod downwardapi-volume-e8959141-01e1-40ab-a10e-307c1f14c5e5 to disappear
Oct 25 17:34:57.083: INFO: Pod downwardapi-volume-e8959141-01e1-40ab-a10e-307c1f14c5e5 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:34:57.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7859" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":248,"skipped":4063,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}

------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-cli] Kubectl client
... skipping 11 lines ...
Oct 25 17:34:57.640: INFO: stderr: ""
Oct 25 17:34:57.640: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\nbatch/v2alpha1\ncertificates.k8s.io/v1\ncertificates.k8s.io/v1beta1\ncloud.google.com/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nmetrics.k8s.io/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1alpha1\nscheduling.k8s.io/v1beta1\nsnapshot.storage.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:34:57.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6594" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":306,"completed":249,"skipped":4063,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Networking
... skipping 40 lines ...
Oct 25 17:35:22.079: INFO: reached 10.64.3.57 after 0/1 tries
Oct 25 17:35:22.079: INFO: Going to retry 0 out of 3 pods....
[AfterEach] [sig-network] Networking
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:35:22.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-1459" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":306,"completed":250,"skipped":4076,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl diff 
  should check if kubectl diff finds a difference for Deployments [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-cli] Kubectl client
... skipping 17 lines ...
Oct 25 17:35:24.411: INFO: stderr: ""
Oct 25 17:35:24.411: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:35:24.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3333" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":306,"completed":251,"skipped":4119,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Kubelet
... skipping 9 lines ...
[It] should be possible to delete [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[AfterEach] [k8s.io] Kubelet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:35:25.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4521" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":306,"completed":252,"skipped":4128,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating secret with name secret-test-22e50a7b-9157-4cdc-bd03-5d7aea1f3b08
STEP: Creating a pod to test consume secrets
Oct 25 17:35:25.487: INFO: Waiting up to 5m0s for pod "pod-secrets-23dbcc22-2bff-44a4-8d47-0dfdc8d013a4" in namespace "secrets-3640" to be "Succeeded or Failed"
Oct 25 17:35:25.523: INFO: Pod "pod-secrets-23dbcc22-2bff-44a4-8d47-0dfdc8d013a4": Phase="Pending", Reason="", readiness=false. Elapsed: 35.834444ms
Oct 25 17:35:27.569: INFO: Pod "pod-secrets-23dbcc22-2bff-44a4-8d47-0dfdc8d013a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.08192572s
STEP: Saw pod success
Oct 25 17:35:27.569: INFO: Pod "pod-secrets-23dbcc22-2bff-44a4-8d47-0dfdc8d013a4" satisfied condition "Succeeded or Failed"
Oct 25 17:35:27.608: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod pod-secrets-23dbcc22-2bff-44a4-8d47-0dfdc8d013a4 container secret-env-test: <nil>
STEP: delete the pod
Oct 25 17:35:27.717: INFO: Waiting for pod pod-secrets-23dbcc22-2bff-44a4-8d47-0dfdc8d013a4 to disappear
Oct 25 17:35:27.759: INFO: Pod pod-secrets-23dbcc22-2bff-44a4-8d47-0dfdc8d013a4 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:35:27.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3640" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":306,"completed":253,"skipped":4158,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
S
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 64 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:35:35.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5973" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":306,"completed":254,"skipped":4159,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSS
------------------------------
[k8s.io] Variable Expansion 
  should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 25 17:35:35.157: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating the pod with failed condition
STEP: updating the pod
Oct 25 17:37:36.131: INFO: Successfully updated pod "var-expansion-ed4160db-2469-4d6e-ac70-93e6b2271020"
STEP: waiting for pod running
STEP: deleting the pod gracefully
Oct 25 17:37:38.240: INFO: Deleting pod "var-expansion-ed4160db-2469-4d6e-ac70-93e6b2271020" in namespace "var-expansion-5304"
Oct 25 17:37:38.286: INFO: Wait up to 5m0s for pod "var-expansion-ed4160db-2469-4d6e-ac70-93e6b2271020" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:38:22.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5304" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":306,"completed":255,"skipped":4163,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SS
------------------------------
[sig-network] Services 
  should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Services
... skipping 17 lines ...
STEP: creating replication controller affinity-nodeport-timeout in namespace services-1654
I1025 17:38:26.349040  144261 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-1654, replica count: 3
I1025 17:38:29.399768  144261 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Oct 25 17:38:29.511: INFO: Creating new exec pod
Oct 25 17:38:32.800: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.44.183 --kubeconfig=/workspace/.kube/config --namespace=services-1654 exec execpod-affinityrwt48 -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80'
Oct 25 17:38:34.325: INFO: rc: 1
Oct 25 17:38:34.325: INFO: Service reachability failing with error: error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.44.183 --kubeconfig=/workspace/.kube/config --namespace=services-1654 exec execpod-affinityrwt48 -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-nodeport-timeout 80
nc: connect to affinity-nodeport-timeout port 80 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 25 17:38:35.325: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.44.183 --kubeconfig=/workspace/.kube/config --namespace=services-1654 exec execpod-affinityrwt48 -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80'
Oct 25 17:38:36.820: INFO: stderr: "+ nc -zv -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n"
Oct 25 17:38:36.820: INFO: stdout: ""
Oct 25 17:38:36.821: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.44.183 --kubeconfig=/workspace/.kube/config --namespace=services-1654 exec execpod-affinityrwt48 -- /bin/sh -x -c nc -zv -t -w 2 10.0.153.54 80'
... skipping 43 lines ...
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:39:12.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1654" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:781
•{"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":306,"completed":256,"skipped":4165,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Probing container
... skipping 17 lines ...
Oct 25 17:41:34.514: INFO: Restart count of pod container-probe-6865/liveness-6a046d21-5090-48d3-a22a-adff0a20b3f1 is now 5 (2m19.306447934s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:41:34.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6865" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":306,"completed":257,"skipped":4185,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] version v1
... skipping 339 lines ...
Oct 25 17:41:43.346: INFO: Deleting ReplicationController proxy-service-cf47r took: 54.898164ms
Oct 25 17:41:44.046: INFO: Terminating ReplicationController proxy-service-cf47r pods took: 700.286706ms
[AfterEach] version v1
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:41:51.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-5846" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":306,"completed":258,"skipped":4216,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 78 lines ...
Type = [Normal], Name = [filler-pod-563c8701-ea6a-4399-85e5-9accf2eb71dc.16414d190716fda0], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9642/filler-pod-563c8701-ea6a-4399-85e5-9accf2eb71dc to bootstrap-e2e-minion-group-nmms]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-3ea731b9-b6ea-4e50-a9cb-0a836c4bd639.16414d1904faa867], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9642/filler-pod-3ea731b9-b6ea-4e50-a9cb-0a836c4bd639 to bootstrap-e2e-minion-group-jzdr]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-563c8701-ea6a-4399-85e5-9accf2eb71dc.16414d19344fdf4d], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: 
Type = [Warning], Name = [filler-pod-3ea731b9-b6ea-4e50-a9cb-0a836c4bd639.16414d194f330239], Reason = [FailedMount], Message = [MountVolume.SetUp failed for volume "default-token-pc4xs" : failed to sync secret cache: timed out waiting for the condition]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-3ea731b9-b6ea-4e50-a9cb-0a836c4bd639.16414d199b2c7a33], Reason = [Started], Message = [Started container filler-pod-3ea731b9-b6ea-4e50-a9cb-0a836c4bd639]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-563c8701-ea6a-4399-85e5-9accf2eb71dc.16414d193c260cc5], Reason = [Started], Message = [Started container filler-pod-563c8701-ea6a-4399-85e5-9accf2eb71dc]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-b1896dcb-1952-4efc-b9a1-eef2d29964cc.16414d1941a7f409], Reason = [Started], Message = [Started container filler-pod-b1896dcb-1952-4efc-b9a1-eef2d29964cc]
... skipping 8 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:41:58.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9642" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":306,"completed":259,"skipped":4227,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 21 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:42:06.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5605" for this suite.
STEP: Destroying namespace "webhook-5605-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":306,"completed":260,"skipped":4246,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSS
------------------------------
[sig-network] Ingress API 
  should support creating Ingress API operations [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Ingress API
... skipping 25 lines ...
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] Ingress API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:42:10.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingress-8329" for this suite.
•{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":306,"completed":261,"skipped":4250,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name projected-configmap-test-volume-map-15863c1d-732a-4bc4-a5af-e0928bcb3f0b
STEP: Creating a pod to test consume configMaps
Oct 25 17:42:11.288: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2016abde-7e0d-475d-8713-f77f6ee58fc8" in namespace "projected-9967" to be "Succeeded or Failed"
Oct 25 17:42:11.359: INFO: Pod "pod-projected-configmaps-2016abde-7e0d-475d-8713-f77f6ee58fc8": Phase="Pending", Reason="", readiness=false. Elapsed: 70.6648ms
Oct 25 17:42:13.501: INFO: Pod "pod-projected-configmaps-2016abde-7e0d-475d-8713-f77f6ee58fc8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.212515932s
STEP: Saw pod success
Oct 25 17:42:13.501: INFO: Pod "pod-projected-configmaps-2016abde-7e0d-475d-8713-f77f6ee58fc8" satisfied condition "Succeeded or Failed"
Oct 25 17:42:13.537: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod pod-projected-configmaps-2016abde-7e0d-475d-8713-f77f6ee58fc8 container agnhost-container: <nil>
STEP: delete the pod
Oct 25 17:42:13.645: INFO: Waiting for pod pod-projected-configmaps-2016abde-7e0d-475d-8713-f77f6ee58fc8 to disappear
Oct 25 17:42:13.682: INFO: Pod pod-projected-configmaps-2016abde-7e0d-475d-8713-f77f6ee58fc8 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:42:13.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9967" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":306,"completed":262,"skipped":4269,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 25 17:42:13.758: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Oct 25 17:42:16.323: INFO: Deleting pod "var-expansion-b5e9dd17-c513-456b-ac2f-b21e748fd9a8" in namespace "var-expansion-3711"
Oct 25 17:42:16.365: INFO: Wait up to 5m0s for pod "var-expansion-b5e9dd17-c513-456b-ac2f-b21e748fd9a8" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:43:22.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3711" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":306,"completed":263,"skipped":4277,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] ConfigMap
... skipping 9 lines ...
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:43:25.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6901" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":306,"completed":264,"skipped":4282,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] DNS
... skipping 16 lines ...

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:43:36.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3608" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":306,"completed":265,"skipped":4290,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 25 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:43:41.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5669" for this suite.
STEP: Destroying namespace "webhook-5669-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":306,"completed":266,"skipped":4337,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-node] PodTemplates 
  should run the lifecycle of PodTemplates [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-node] PodTemplates
... skipping 5 lines ...
[It] should run the lifecycle of PodTemplates [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[AfterEach] [sig-node] PodTemplates
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:43:42.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-1855" for this suite.
•{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":306,"completed":267,"skipped":4348,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}

------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Oct 25 17:43:43.035: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7717f9ca-ddff-43f6-a607-68e3416aed4f" in namespace "projected-8174" to be "Succeeded or Failed"
Oct 25 17:43:43.085: INFO: Pod "downwardapi-volume-7717f9ca-ddff-43f6-a607-68e3416aed4f": Phase="Pending", Reason="", readiness=false. Elapsed: 49.459524ms
Oct 25 17:43:45.131: INFO: Pod "downwardapi-volume-7717f9ca-ddff-43f6-a607-68e3416aed4f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.09533433s
STEP: Saw pod success
Oct 25 17:43:45.131: INFO: Pod "downwardapi-volume-7717f9ca-ddff-43f6-a607-68e3416aed4f" satisfied condition "Succeeded or Failed"
Oct 25 17:43:45.175: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod downwardapi-volume-7717f9ca-ddff-43f6-a607-68e3416aed4f container client-container: <nil>
STEP: delete the pod
Oct 25 17:43:45.298: INFO: Waiting for pod downwardapi-volume-7717f9ca-ddff-43f6-a607-68e3416aed4f to disappear
Oct 25 17:43:45.338: INFO: Pod downwardapi-volume-7717f9ca-ddff-43f6-a607-68e3416aed4f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:43:45.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8174" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":306,"completed":268,"skipped":4348,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}

------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] Deployment
... skipping 25 lines ...
Oct 25 17:43:48.648: INFO: Pod "test-recreate-deployment-f79dd4667-2dxmd" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-f79dd4667-2dxmd test-recreate-deployment-f79dd4667- deployment-3202  c2c3b9d7-6284-4b7f-b96e-ab5d44eb1015 22112 0 2020-10-25 17:43:48 +0000 UTC <nil> <nil> map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [{apps/v1 ReplicaSet test-recreate-deployment-f79dd4667 731a4ff1-7a5e-4d38-8abf-732a140314d0 0xc004a41ed0 0xc004a41ed1}] []  [{kube-controller-manager Update v1 2020-10-25 17:43:48 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"731a4ff1-7a5e-4d38-8abf-732a140314d0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-25 17:43:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dqmqj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dqmqj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dqmqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-05w9,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,Supplemental
Groups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 17:43:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 17:43:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 17:43:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-25 17:43:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.5,PodIP:,StartTime:2020-10-25 17:43:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:43:48.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3202" for this suite.
•{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":306,"completed":269,"skipped":4348,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}

------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Watchers
... skipping 26 lines ...
Oct 25 17:44:39.577: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-3790  43881ad0-9f10-4da0-bbb8-4fb1d15a60f6 22251 0 2020-10-25 17:44:29 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-10-25 17:44:29 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Oct 25 17:44:39.577: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-3790  43881ad0-9f10-4da0-bbb8-4fb1d15a60f6 22251 0 2020-10-25 17:44:29 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-10-25 17:44:29 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:44:49.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3790" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":306,"completed":270,"skipped":4348,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Variable Expansion
... skipping 19 lines ...
Oct 25 17:44:53.257: INFO: Deleting pod "var-expansion-a542674f-b5d0-42e7-99ed-5711c2469b47" in namespace "var-expansion-8178"
Oct 25 17:44:53.297: INFO: Wait up to 5m0s for pod "var-expansion-a542674f-b5d0-42e7-99ed-5711c2469b47" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:45:31.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-8178" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":306,"completed":271,"skipped":4366,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
... skipping 22 lines ...
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:45:39.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-203" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
•{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":306,"completed":272,"skipped":4416,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Services
... skipping 24 lines ...
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:46:01.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1699" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:781
•{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":306,"completed":273,"skipped":4424,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] DNS
... skipping 18 lines ...
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:46:06.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5290" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":306,"completed":274,"skipped":4436,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
... skipping 7 lines ...
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:46:06.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-1445" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":306,"completed":275,"skipped":4442,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-cli] Kubectl client
... skipping 28 lines ...
Oct 25 17:46:20.977: INFO: stderr: ""
Oct 25 17:46:20.977: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:46:20.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-726" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":306,"completed":276,"skipped":4449,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 8 lines ...
Oct 25 17:46:21.713: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"a9ac27ce-1b41-4c27-8b34-b8edc3c07645", Controller:(*bool)(0xc004fd2416), BlockOwnerDeletion:(*bool)(0xc004fd2417)}}
Oct 25 17:46:21.753: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"c9cd0498-7f39-4a6b-adf5-b5b8a4da796f", Controller:(*bool)(0xc004fd2636), BlockOwnerDeletion:(*bool)(0xc004fd2637)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:46:26.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3947" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":306,"completed":277,"skipped":4464,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating projection with secret that has name projected-secret-test-map-669009ff-5731-4ba4-b124-c88af4e14466
STEP: Creating a pod to test consume secrets
Oct 25 17:46:27.213: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4b7eaef6-7c90-4599-94ff-d9b35d8e8c5e" in namespace "projected-7927" to be "Succeeded or Failed"
Oct 25 17:46:27.259: INFO: Pod "pod-projected-secrets-4b7eaef6-7c90-4599-94ff-d9b35d8e8c5e": Phase="Pending", Reason="", readiness=false. Elapsed: 45.217604ms
Oct 25 17:46:29.325: INFO: Pod "pod-projected-secrets-4b7eaef6-7c90-4599-94ff-d9b35d8e8c5e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.111295632s
STEP: Saw pod success
Oct 25 17:46:29.325: INFO: Pod "pod-projected-secrets-4b7eaef6-7c90-4599-94ff-d9b35d8e8c5e" satisfied condition "Succeeded or Failed"
Oct 25 17:46:29.398: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod pod-projected-secrets-4b7eaef6-7c90-4599-94ff-d9b35d8e8c5e container projected-secret-volume-test: <nil>
STEP: delete the pod
Oct 25 17:46:29.626: INFO: Waiting for pod pod-projected-secrets-4b7eaef6-7c90-4599-94ff-d9b35d8e8c5e to disappear
Oct 25 17:46:29.672: INFO: Pod pod-projected-secrets-4b7eaef6-7c90-4599-94ff-d9b35d8e8c5e no longer exists
[AfterEach] [sig-storage] Projected secret
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:46:29.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7927" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":306,"completed":278,"skipped":4467,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] DNS
... skipping 18 lines ...
Oct 25 17:46:32.677: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3750.svc.cluster.local from pod dns-3750/dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb: the server could not find the requested resource (get pods dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb)
Oct 25 17:46:32.720: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3750.svc.cluster.local from pod dns-3750/dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb: the server could not find the requested resource (get pods dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb)
Oct 25 17:46:33.002: INFO: Unable to read jessie_udp@dns-test-service.dns-3750.svc.cluster.local from pod dns-3750/dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb: the server could not find the requested resource (get pods dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb)
Oct 25 17:46:33.040: INFO: Unable to read jessie_tcp@dns-test-service.dns-3750.svc.cluster.local from pod dns-3750/dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb: the server could not find the requested resource (get pods dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb)
Oct 25 17:46:33.080: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3750.svc.cluster.local from pod dns-3750/dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb: the server could not find the requested resource (get pods dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb)
Oct 25 17:46:33.118: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3750.svc.cluster.local from pod dns-3750/dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb: the server could not find the requested resource (get pods dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb)
Oct 25 17:46:33.379: INFO: Lookups using dns-3750/dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb failed for: [wheezy_udp@dns-test-service.dns-3750.svc.cluster.local wheezy_tcp@dns-test-service.dns-3750.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3750.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3750.svc.cluster.local jessie_udp@dns-test-service.dns-3750.svc.cluster.local jessie_tcp@dns-test-service.dns-3750.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3750.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3750.svc.cluster.local]

Oct 25 17:46:38.439: INFO: Unable to read wheezy_udp@dns-test-service.dns-3750.svc.cluster.local from pod dns-3750/dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb: the server could not find the requested resource (get pods dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb)
Oct 25 17:46:38.478: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3750.svc.cluster.local from pod dns-3750/dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb: the server could not find the requested resource (get pods dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb)
Oct 25 17:46:38.515: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3750.svc.cluster.local from pod dns-3750/dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb: the server could not find the requested resource (get pods dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb)
Oct 25 17:46:38.553: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3750.svc.cluster.local from pod dns-3750/dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb: the server could not find the requested resource (get pods dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb)
Oct 25 17:46:38.836: INFO: Unable to read jessie_udp@dns-test-service.dns-3750.svc.cluster.local from pod dns-3750/dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb: the server could not find the requested resource (get pods dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb)
Oct 25 17:46:38.876: INFO: Unable to read jessie_tcp@dns-test-service.dns-3750.svc.cluster.local from pod dns-3750/dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb: the server could not find the requested resource (get pods dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb)
Oct 25 17:46:38.914: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3750.svc.cluster.local from pod dns-3750/dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb: the server could not find the requested resource (get pods dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb)
Oct 25 17:46:38.953: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3750.svc.cluster.local from pod dns-3750/dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb: the server could not find the requested resource (get pods dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb)
Oct 25 17:46:39.202: INFO: Lookups using dns-3750/dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb failed for: [wheezy_udp@dns-test-service.dns-3750.svc.cluster.local wheezy_tcp@dns-test-service.dns-3750.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3750.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3750.svc.cluster.local jessie_udp@dns-test-service.dns-3750.svc.cluster.local jessie_tcp@dns-test-service.dns-3750.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3750.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3750.svc.cluster.local]

Oct 25 17:46:43.420: INFO: Unable to read wheezy_udp@dns-test-service.dns-3750.svc.cluster.local from pod dns-3750/dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb: the server could not find the requested resource (get pods dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb)
Oct 25 17:46:43.461: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3750.svc.cluster.local from pod dns-3750/dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb: the server could not find the requested resource (get pods dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb)
Oct 25 17:46:43.505: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3750.svc.cluster.local from pod dns-3750/dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb: the server could not find the requested resource (get pods dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb)
Oct 25 17:46:43.552: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3750.svc.cluster.local from pod dns-3750/dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb: the server could not find the requested resource (get pods dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb)
Oct 25 17:46:43.885: INFO: Unable to read jessie_udp@dns-test-service.dns-3750.svc.cluster.local from pod dns-3750/dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb: the server could not find the requested resource (get pods dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb)
Oct 25 17:46:43.928: INFO: Unable to read jessie_tcp@dns-test-service.dns-3750.svc.cluster.local from pod dns-3750/dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb: the server could not find the requested resource (get pods dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb)
Oct 25 17:46:43.976: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3750.svc.cluster.local from pod dns-3750/dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb: the server could not find the requested resource (get pods dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb)
Oct 25 17:46:44.019: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3750.svc.cluster.local from pod dns-3750/dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb: the server could not find the requested resource (get pods dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb)
Oct 25 17:46:44.509: INFO: Lookups using dns-3750/dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb failed for: [wheezy_udp@dns-test-service.dns-3750.svc.cluster.local wheezy_tcp@dns-test-service.dns-3750.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3750.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3750.svc.cluster.local jessie_udp@dns-test-service.dns-3750.svc.cluster.local jessie_tcp@dns-test-service.dns-3750.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3750.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3750.svc.cluster.local]

Oct 25 17:46:48.460: INFO: Unable to read wheezy_udp@dns-test-service.dns-3750.svc.cluster.local from pod dns-3750/dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb: the server could not find the requested resource (get pods dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb)
Oct 25 17:46:48.560: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3750.svc.cluster.local from pod dns-3750/dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb: the server could not find the requested resource (get pods dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb)
Oct 25 17:46:48.694: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3750.svc.cluster.local from pod dns-3750/dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb: the server could not find the requested resource (get pods dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb)
Oct 25 17:46:48.753: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3750.svc.cluster.local from pod dns-3750/dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb: the server could not find the requested resource (get pods dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb)
Oct 25 17:46:49.185: INFO: Unable to read jessie_udp@dns-test-service.dns-3750.svc.cluster.local from pod dns-3750/dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb: the server could not find the requested resource (get pods dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb)
Oct 25 17:46:49.223: INFO: Unable to read jessie_tcp@dns-test-service.dns-3750.svc.cluster.local from pod dns-3750/dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb: the server could not find the requested resource (get pods dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb)
Oct 25 17:46:49.263: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3750.svc.cluster.local from pod dns-3750/dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb: the server could not find the requested resource (get pods dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb)
Oct 25 17:46:49.301: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3750.svc.cluster.local from pod dns-3750/dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb: the server could not find the requested resource (get pods dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb)
Oct 25 17:46:49.534: INFO: Lookups using dns-3750/dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb failed for: [wheezy_udp@dns-test-service.dns-3750.svc.cluster.local wheezy_tcp@dns-test-service.dns-3750.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3750.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3750.svc.cluster.local jessie_udp@dns-test-service.dns-3750.svc.cluster.local jessie_tcp@dns-test-service.dns-3750.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3750.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3750.svc.cluster.local]

Oct 25 17:46:53.417: INFO: Unable to read wheezy_udp@dns-test-service.dns-3750.svc.cluster.local from pod dns-3750/dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb: the server could not find the requested resource (get pods dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb)
Oct 25 17:46:53.454: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3750.svc.cluster.local from pod dns-3750/dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb: the server could not find the requested resource (get pods dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb)
Oct 25 17:46:53.491: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3750.svc.cluster.local from pod dns-3750/dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb: the server could not find the requested resource (get pods dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb)
Oct 25 17:46:53.530: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3750.svc.cluster.local from pod dns-3750/dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb: the server could not find the requested resource (get pods dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb)
Oct 25 17:46:53.803: INFO: Unable to read jessie_udp@dns-test-service.dns-3750.svc.cluster.local from pod dns-3750/dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb: the server could not find the requested resource (get pods dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb)
Oct 25 17:46:53.846: INFO: Unable to read jessie_tcp@dns-test-service.dns-3750.svc.cluster.local from pod dns-3750/dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb: the server could not find the requested resource (get pods dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb)
Oct 25 17:46:53.887: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3750.svc.cluster.local from pod dns-3750/dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb: the server could not find the requested resource (get pods dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb)
Oct 25 17:46:53.926: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3750.svc.cluster.local from pod dns-3750/dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb: the server could not find the requested resource (get pods dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb)
Oct 25 17:46:54.160: INFO: Lookups using dns-3750/dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb failed for: [wheezy_udp@dns-test-service.dns-3750.svc.cluster.local wheezy_tcp@dns-test-service.dns-3750.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3750.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3750.svc.cluster.local jessie_udp@dns-test-service.dns-3750.svc.cluster.local jessie_tcp@dns-test-service.dns-3750.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3750.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3750.svc.cluster.local]

Oct 25 17:46:58.431: INFO: Unable to read wheezy_udp@dns-test-service.dns-3750.svc.cluster.local from pod dns-3750/dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb: the server could not find the requested resource (get pods dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb)
Oct 25 17:46:58.469: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3750.svc.cluster.local from pod dns-3750/dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb: the server could not find the requested resource (get pods dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb)
Oct 25 17:46:58.508: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3750.svc.cluster.local from pod dns-3750/dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb: the server could not find the requested resource (get pods dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb)
Oct 25 17:46:58.545: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3750.svc.cluster.local from pod dns-3750/dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb: the server could not find the requested resource (get pods dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb)
Oct 25 17:46:58.825: INFO: Unable to read jessie_udp@dns-test-service.dns-3750.svc.cluster.local from pod dns-3750/dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb: the server could not find the requested resource (get pods dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb)
Oct 25 17:46:58.862: INFO: Unable to read jessie_tcp@dns-test-service.dns-3750.svc.cluster.local from pod dns-3750/dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb: the server could not find the requested resource (get pods dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb)
Oct 25 17:46:58.899: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3750.svc.cluster.local from pod dns-3750/dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb: the server could not find the requested resource (get pods dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb)
Oct 25 17:46:58.937: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3750.svc.cluster.local from pod dns-3750/dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb: the server could not find the requested resource (get pods dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb)
Oct 25 17:46:59.170: INFO: Lookups using dns-3750/dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb failed for: [wheezy_udp@dns-test-service.dns-3750.svc.cluster.local wheezy_tcp@dns-test-service.dns-3750.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3750.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3750.svc.cluster.local jessie_udp@dns-test-service.dns-3750.svc.cluster.local jessie_tcp@dns-test-service.dns-3750.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3750.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3750.svc.cluster.local]

Oct 25 17:47:04.212: INFO: DNS probes using dns-3750/dns-test-328b59f5-e5b5-4e87-8c7d-d2cdd7baf8fb succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:47:04.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3750" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":306,"completed":279,"skipped":4478,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SS
------------------------------
[k8s.io] Lease 
  lease API should be available [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Lease
... skipping 5 lines ...
[It] lease API should be available [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[AfterEach] [k8s.io] Lease
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:47:05.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-1775" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":306,"completed":280,"skipped":4480,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name configmap-test-volume-map-ec5b22df-cec8-41b9-b466-af8eef8f25a8
STEP: Creating a pod to test consume configMaps
Oct 25 17:47:05.898: INFO: Waiting up to 5m0s for pod "pod-configmaps-063ab30b-e8da-4c15-9861-df24b1873884" in namespace "configmap-9912" to be "Succeeded or Failed"
Oct 25 17:47:05.934: INFO: Pod "pod-configmaps-063ab30b-e8da-4c15-9861-df24b1873884": Phase="Pending", Reason="", readiness=false. Elapsed: 35.879094ms
Oct 25 17:47:07.971: INFO: Pod "pod-configmaps-063ab30b-e8da-4c15-9861-df24b1873884": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.072983686s
STEP: Saw pod success
Oct 25 17:47:07.971: INFO: Pod "pod-configmaps-063ab30b-e8da-4c15-9861-df24b1873884" satisfied condition "Succeeded or Failed"
Oct 25 17:47:08.008: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod pod-configmaps-063ab30b-e8da-4c15-9861-df24b1873884 container configmap-volume-test: <nil>
STEP: delete the pod
Oct 25 17:47:08.106: INFO: Waiting for pod pod-configmaps-063ab30b-e8da-4c15-9861-df24b1873884 to disappear
Oct 25 17:47:08.142: INFO: Pod pod-configmaps-063ab30b-e8da-4c15-9861-df24b1873884 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:47:08.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9912" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":281,"skipped":4525,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 12 lines ...
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:47:19.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2400" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":306,"completed":282,"skipped":4539,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}

------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 39 lines ...
Oct 25 17:47:28.170: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.44.183 --kubeconfig=/workspace/.kube/config --namespace=crd-publish-openapi-7626 explain e2e-test-crd-publish-openapi-720-crds.spec'
Oct 25 17:47:28.489: INFO: stderr: ""
Oct 25 17:47:28.489: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-720-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Oct 25 17:47:28.490: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.44.183 --kubeconfig=/workspace/.kube/config --namespace=crd-publish-openapi-7626 explain e2e-test-crd-publish-openapi-720-crds.spec.bars'
Oct 25 17:47:28.803: INFO: stderr: ""
Oct 25 17:47:28.803: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-720-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Oct 25 17:47:28.804: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.44.183 --kubeconfig=/workspace/.kube/config --namespace=crd-publish-openapi-7626 explain e2e-test-crd-publish-openapi-720-crds.spec.bars2'
Oct 25 17:47:29.122: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:47:36.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7626" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":306,"completed":283,"skipped":4539,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Pods
... skipping 9 lines ...
STEP: creating pod
Oct 25 17:47:38.640: INFO: Pod pod-hostip-48bb5cd9-58a2-415b-a04d-ed41c4509bb7 has hostIP: 10.138.0.5
[AfterEach] [k8s.io] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:47:38.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1487" for this suite.
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":306,"completed":284,"skipped":4549,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-node] PodTemplates 
  should delete a collection of pod templates [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-node] PodTemplates
... skipping 14 lines ...
STEP: check that the list of pod templates matches the requested quantity
Oct 25 17:47:39.276: INFO: requesting list of pod templates to confirm quantity
[AfterEach] [sig-node] PodTemplates
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:47:39.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-1700" for this suite.
•{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":306,"completed":285,"skipped":4564,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSS
------------------------------
[sig-api-machinery] Secrets 
  should patch a secret [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Secrets
... skipping 10 lines ...
STEP: deleting the secret using a LabelSelector
STEP: listing secrets in all namespaces, searching for label name and value in patch
[AfterEach] [sig-api-machinery] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:47:40.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6200" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":306,"completed":286,"skipped":4567,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SS
------------------------------
[sig-api-machinery] server version 
  should find the server version [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] server version
... skipping 11 lines ...
Oct 25 17:47:41.471: INFO: cleanMinorVersion: 20
Oct 25 17:47:41.471: INFO: Minor version: 20+
[AfterEach] [sig-api-machinery] server version
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:47:41.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "server-version-8353" for this suite.
•{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":306,"completed":287,"skipped":4569,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Services
... skipping 24 lines ...
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:47:48.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7240" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:781
•{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":306,"completed":288,"skipped":4606,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name projected-configmap-test-volume-map-366c9a7e-4fb6-4b9a-93e1-e2db50383410
STEP: Creating a pod to test consume configMaps
Oct 25 17:47:48.625: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5ca32710-db75-4596-bf90-004f4b52ba6e" in namespace "projected-2373" to be "Succeeded or Failed"
Oct 25 17:47:48.795: INFO: Pod "pod-projected-configmaps-5ca32710-db75-4596-bf90-004f4b52ba6e": Phase="Pending", Reason="", readiness=false. Elapsed: 169.51623ms
Oct 25 17:47:50.838: INFO: Pod "pod-projected-configmaps-5ca32710-db75-4596-bf90-004f4b52ba6e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.21253629s
STEP: Saw pod success
Oct 25 17:47:50.838: INFO: Pod "pod-projected-configmaps-5ca32710-db75-4596-bf90-004f4b52ba6e" satisfied condition "Succeeded or Failed"
Oct 25 17:47:50.874: INFO: Trying to get logs from node bootstrap-e2e-minion-group-05w9 pod pod-projected-configmaps-5ca32710-db75-4596-bf90-004f4b52ba6e container agnhost-container: <nil>
STEP: delete the pod
Oct 25 17:47:51.005: INFO: Waiting for pod pod-projected-configmaps-5ca32710-db75-4596-bf90-004f4b52ba6e to disappear
Oct 25 17:47:51.042: INFO: Pod pod-projected-configmaps-5ca32710-db75-4596-bf90-004f4b52ba6e no longer exists
[AfterEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:47:51.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2373" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":289,"skipped":4627,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] StatefulSet
... skipping 58 lines ...
Oct 25 17:50:25.867: INFO: Waiting for statefulset status.replicas updated to 0
Oct 25 17:50:25.903: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:50:26.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6955" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":306,"completed":290,"skipped":4631,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 16 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:50:57.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-33" for this suite.
STEP: Destroying namespace "nsdeletetest-8864" for this suite.
Oct 25 17:50:58.014: INFO: Namespace nsdeletetest-8864 was already deleted
STEP: Destroying namespace "nsdeletetest-6886" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":306,"completed":291,"skipped":4649,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSS
------------------------------
[k8s.io] Pods 
  should delete a collection of pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Pods
... skipping 13 lines ...
STEP: waiting for all 3 pods to be located
STEP: waiting for all pods to be deleted
[AfterEach] [k8s.io] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:50:58.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6255" for this suite.
•{"msg":"PASSED [k8s.io] Pods should delete a collection of pods [Conformance]","total":306,"completed":292,"skipped":4654,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 21 lines ...
Oct 25 17:51:11.661: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:51:11.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9757" for this suite.
•{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":306,"completed":293,"skipped":4701,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
S
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 25 17:51:11.795: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating the pod
Oct 25 17:51:11.986: INFO: PodSpec: initContainers in spec.initContainers
Oct 25 17:52:00.054: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-a035ce85-3b5c-4384-bba6-7485b48b4a00", GenerateName:"", Namespace:"init-container-9021", SelfLink:"", UID:"fd908350-fc12-48a7-b958-0d70708e9f6c", ResourceVersion:"23916", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63739245072, loc:(*time.Location)(0x77697a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"986391247"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001d741c0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001d741e0)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001d74200), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001d74220)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-dqw82", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc006b2ed40), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-dqw82", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-dqw82", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-dqw82", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0057642e8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"bootstrap-e2e-minion-group-05w9", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0020f2230), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc005764370)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc005764390)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc005764398), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00576439c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc0057a2060), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739245072, loc:(*time.Location)(0x77697a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739245072, loc:(*time.Location)(0x77697a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739245072, loc:(*time.Location)(0x77697a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739245072, loc:(*time.Location)(0x77697a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.138.0.5", PodIP:"10.64.2.41", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.64.2.41"}}, StartTime:(*v1.Time)(0xc001d74240), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0020f2310)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0020f2380)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://ff6d56fa25ef23881afc548354fa2a80075cde951d3f589c271b74f595580f6a", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001d74280), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001d74260), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc00576441f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:52:00.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9021" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":306,"completed":294,"skipped":4702,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should run through the lifecycle of a ServiceAccount [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 10 lines ...
STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector)
STEP: deleting the ServiceAccount
[AfterEach] [sig-auth] ServiceAccounts
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:52:00.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-9406" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":306,"completed":295,"skipped":4715,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSS
------------------------------
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-cli] Kubectl client
... skipping 34 lines ...
Oct 25 17:52:07.016: INFO: stdout: "service/rm3 exposed\n"
Oct 25 17:52:07.146: INFO: Service rm3 in namespace kubectl-2648 found.
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:52:09.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2648" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":306,"completed":296,"skipped":4719,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 67 lines ...
Oct 25 17:52:32.755: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"24095"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:52:32.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-991" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":306,"completed":297,"skipped":4740,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 54 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:52:42.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3916" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":306,"completed":298,"skipped":4742,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] ReplicaSet
... skipping 12 lines ...
Oct 25 17:52:48.902: INFO: Trying to dial the pod
Oct 25 17:52:54.015: INFO: Controller my-hostname-basic-990633b5-ba50-4a44-a948-5835445d142f: Got expected result from replica 1 [my-hostname-basic-990633b5-ba50-4a44-a948-5835445d142f-tdh4w]: "my-hostname-basic-990633b5-ba50-4a44-a948-5835445d142f-tdh4w", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:52:54.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-6516" for this suite.
•{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":306,"completed":299,"skipped":4756,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Oct 25 17:52:54.412: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-6af0d38f-1e48-4924-8547-48885cfde087" in namespace "security-context-test-4072" to be "Succeeded or Failed"
Oct 25 17:52:54.462: INFO: Pod "busybox-readonly-false-6af0d38f-1e48-4924-8547-48885cfde087": Phase="Pending", Reason="", readiness=false. Elapsed: 49.277603ms
Oct 25 17:52:56.515: INFO: Pod "busybox-readonly-false-6af0d38f-1e48-4924-8547-48885cfde087": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.102505525s
Oct 25 17:52:56.515: INFO: Pod "busybox-readonly-false-6af0d38f-1e48-4924-8547-48885cfde087" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:52:56.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-4072" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":306,"completed":300,"skipped":4792,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Docker Containers
... skipping 2 lines ...
Oct 25 17:52:56.632: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test override arguments
Oct 25 17:52:57.042: INFO: Waiting up to 5m0s for pod "client-containers-081a3616-e800-4a73-9624-2386c03787a3" in namespace "containers-3150" to be "Succeeded or Failed"
Oct 25 17:52:57.159: INFO: Pod "client-containers-081a3616-e800-4a73-9624-2386c03787a3": Phase="Pending", Reason="", readiness=false. Elapsed: 116.808776ms
Oct 25 17:52:59.197: INFO: Pod "client-containers-081a3616-e800-4a73-9624-2386c03787a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.154300201s
STEP: Saw pod success
Oct 25 17:52:59.197: INFO: Pod "client-containers-081a3616-e800-4a73-9624-2386c03787a3" satisfied condition "Succeeded or Failed"
Oct 25 17:52:59.233: INFO: Trying to get logs from node bootstrap-e2e-minion-group-jzdr pod client-containers-081a3616-e800-4a73-9624-2386c03787a3 container agnhost-container: <nil>
STEP: delete the pod
Oct 25 17:52:59.354: INFO: Waiting for pod client-containers-081a3616-e800-4a73-9624-2386c03787a3 to disappear
Oct 25 17:52:59.391: INFO: Pod client-containers-081a3616-e800-4a73-9624-2386c03787a3 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:52:59.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3150" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":306,"completed":301,"skipped":4803,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should have session affinity work for NodePort service [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Services
... skipping 57 lines ...
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:53:23.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5281" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:781
•{"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":306,"completed":302,"skipped":4817,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] StatefulSet
... skipping 22 lines ...
Oct 25 17:54:34.280: INFO: Waiting for statefulset status.replicas updated to 0
Oct 25 17:54:34.321: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:54:34.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6998" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":306,"completed":303,"skipped":4849,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Docker Containers
... skipping 5 lines ...
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[AfterEach] [k8s.io] Docker Containers
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:54:36.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1241" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":306,"completed":304,"skipped":4895,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 23 lines ...
Oct 25 17:54:44.277: INFO: stderr: ""
Oct 25 17:54:44.277: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-3058-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t<string>\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t<string>\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t<Object>\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t<map[string]>\n     Specification of Waldo\n\n   status\t<Object>\n     Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 25 17:54:49.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7871" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":306,"completed":305,"skipped":4917,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSS
Oct 25 17:54:49.899: INFO: Running AfterSuite actions on all nodes
Oct 25 17:54:49.900: INFO: Running AfterSuite actions on node 1
Oct 25 17:54:49.900: INFO: Skipping dumping logs from cluster

JUnit report was created: /logs/artifacts/after/junit_01.xml
{"msg":"Test Suite completed","total":306,"completed":305,"skipped":4923,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}


Summarizing 1 Failure:

[Fail] [sig-cli] Kubectl client Kubectl server-side dry-run [It] should check if kubectl can dry-run update Pods [Conformance] 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:598
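The suite's single failure is the server-side dry-run check. Reduced to a client-go sketch (pod name and namespace are assumptions; kubectl reaches the same apiserver code path via its --dry-run=server flag): the apiserver must run admission and validation for the update but persist nothing, so a follow-up Get has to show the object unchanged.

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config") // path from the log
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    ctx, ns := context.TODO(), "default" // assumed

    pod, err := cs.CoreV1().Pods(ns).Get(ctx, "e2e-test-pod", metav1.GetOptions{}) // name assumed
    if err != nil {
        panic(err)
    }
    if pod.Labels == nil {
        pod.Labels = map[string]string{}
    }
    pod.Labels["dry-run"] = "true"

    // DryRunAll: full admission and validation, no persistence.
    if _, err := cs.CoreV1().Pods(ns).Update(ctx, pod, metav1.UpdateOptions{DryRun: []string{metav1.DryRunAll}}); err != nil {
        panic(err)
    }
    fresh, err := cs.CoreV1().Pods(ns).Get(ctx, "e2e-test-pod", metav1.GetOptions{})
    if err != nil {
        panic(err)
    }
    fmt.Println("label persisted?", fresh.Labels["dry-run"] == "true") // expect false
}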

Ran 306 of 5229 Specs in 6178.642 seconds
FAIL! -- 305 Passed | 1 Failed | 0 Pending | 4923 Skipped
--- FAIL: TestE2E (6178.69s)
FAIL

Ginkgo ran 1 suite in 1h43m0.185374419s
Test Suite Failed
2020/10/25 17:54:49 process.go:155: Step './hack/ginkgo-e2e.sh --ginkgo.focus=\[Conformance\] --report-dir=/logs/artifacts/after --disable-log-dump=true' finished in 1h43m1.41214328s
2020/10/25 17:54:49 e2e.go:544: Dumping logs locally to: /logs/artifacts/after
2020/10/25 17:54:49 process.go:153: Running: ./cluster/log-dump/log-dump.sh /logs/artifacts/after
Checking for custom logdump instances, if any
Sourcing kube-util.sh
Detecting project
... skipping 2 lines ...
Zone: us-west1-b
Dumping logs from master locally to '/logs/artifacts/after'
Trying to find master named 'bootstrap-e2e-master'
Looking for address 'bootstrap-e2e-master-ip'
Using master: bootstrap-e2e-master (external IP: 35.247.44.183; internal IP: (not set))
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Error: No such container: 
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Error: No such container: 
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Error: No such container: 
Changing logfiles to be world-readable for download
Copying 'kube-apiserver.log kube-apiserver-audit.log kube-scheduler.log kube-controller-manager.log etcd.log etcd-events.log glbc.log cluster-autoscaler.log kube-addon-manager.log konnectivity-server.log fluentd.log kubelet.cov startupscript.log' from bootstrap-e2e-master

Specify --start=57057 in the next get-serial-port-output invocation to get only the new output starting from here.
scp: /var/log/cluster-autoscaler.log*: No such file or directory
scp: /var/log/konnectivity-server.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/kubelet.cov.tmp: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Dumping logs from nodes locally to '/logs/artifacts/after'
Detecting nodes in the cluster
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Error: No such container: 
Error: No such container: 
Changing logfiles to be world-readable for download
Changing logfiles to be world-readable for download
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Error: No such container: 
Changing logfiles to be world-readable for download
Copying 'kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov startupscript.log' from bootstrap-e2e-minion-group-05w9
Copying 'kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov startupscript.log' from bootstrap-e2e-minion-group-jzdr
Copying 'kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov startupscript.log' from bootstrap-e2e-minion-group-nmms

Specify --start=110085 in the next get-serial-port-output invocation to get only the new output starting from here.
... skipping 5 lines ...
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
INSTANCE_GROUPS=bootstrap-e2e-minion-group
NODE_NAMES=bootstrap-e2e-minion-group-05w9 bootstrap-e2e-minion-group-jzdr bootstrap-e2e-minion-group-nmms
Failures for bootstrap-e2e-minion-group (if any):
2020/10/25 17:56:55 process.go:155: Step './cluster/log-dump/log-dump.sh /logs/artifacts/after' finished in 2m5.627534605s
2020/10/25 17:56:55 process.go:153: Running: ./hack/e2e-internal/e2e-down.sh
Project: k8s-gci-gce-ingress1-5
... skipping 40 lines ...
Property "users.k8s-gci-gce-ingress1-5_bootstrap-e2e-basic-auth" unset.
Property "contexts.k8s-gci-gce-ingress1-5_bootstrap-e2e" unset.
Cleared config for k8s-gci-gce-ingress1-5_bootstrap-e2e from /workspace/.kube/config
Done
2020/10/25 18:03:07 process.go:155: Step './hack/e2e-internal/e2e-down.sh' finished in 6m11.437183799s
2020/10/25 18:03:07 process.go:96: Saved XML output to /logs/artifacts/after/junit_runner.xml.
2020/10/25 18:03:07 main.go:316: Something went wrong: encountered 1 errors: [error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Conformance\] --report-dir=/logs/artifacts/after --disable-log-dump=true: exit status 1]
Traceback (most recent call last):
  File "../test-infra/scenarios/kubernetes_e2e.py", line 720, in <module>
    main(parse_args())
  File "../test-infra/scenarios/kubernetes_e2e.py", line 570, in main
    mode.start(runner_args)
  File "../test-infra/scenarios/kubernetes_e2e.py", line 228, in start
... skipping 16 lines ...