Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2020-10-24 09:15
Elapsed: 2h27m
Revision: master

No Test Failures!


Error lines from build-log.txt

... skipping 609 lines ...
Trying to find master named 'bootstrap-e2e-master'
Looking for address 'bootstrap-e2e-master-ip'
Using master: bootstrap-e2e-master (external IP: 34.105.36.219; internal IP: (not set))
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

........................Kubernetes cluster created.
Cluster "gce-up-c1-4-glat-up-clu_bootstrap-e2e" set.
User "gce-up-c1-4-glat-up-clu_bootstrap-e2e" set.
Context "gce-up-c1-4-glat-up-clu_bootstrap-e2e" created.
Switched to context "gce-up-c1-4-glat-up-clu_bootstrap-e2e".
... skipping 23 lines ...
bootstrap-e2e-minion-group-bf58   Ready                      <none>   11s   v1.20.0-alpha.3.114+5935fcd704fe89
bootstrap-e2e-minion-group-g27b   Ready                      <none>   12s   v1.20.0-alpha.3.114+5935fcd704fe89
bootstrap-e2e-minion-group-vkx8   Ready                      <none>   13s   v1.20.0-alpha.3.114+5935fcd704fe89
Warning: v1 ComponentStatus is deprecated in v1.19+
Validate output:
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
etcd-1               Healthy   {"health":"true"}   
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
Cluster validation succeeded
Done, listing cluster services:
... skipping 70 lines ...
Zone: us-west1-b
Dumping logs from master locally to '/logs/artifacts/before'
Trying to find master named 'bootstrap-e2e-master'
Looking for address 'bootstrap-e2e-master-ip'
Using master: bootstrap-e2e-master (external IP: 34.105.36.219; internal IP: (not set))
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Error: No such container: 
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Error: No such container: 
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Error: No such container: 
Changing logfiles to be world-readable for download
Copying 'kube-apiserver.log kube-apiserver-audit.log kube-scheduler.log kube-controller-manager.log etcd.log etcd-events.log glbc.log cluster-autoscaler.log kube-addon-manager.log konnectivity-server.log fluentd.log kubelet.cov startupscript.log' from bootstrap-e2e-master

Specify --start=56972 in the next get-serial-port-output invocation to get only the new output starting from here.
scp: /var/log/cluster-autoscaler.log*: No such file or directory
scp: /var/log/konnectivity-server.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Dumping logs from nodes locally to '/logs/artifacts/before'
Detecting nodes in the cluster
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Error: No such container: 
Changing logfiles to be world-readable for download
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Error: No such container: 
Changing logfiles to be world-readable for download
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Error: No such container: 
Changing logfiles to be world-readable for download
Copying 'kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov startupscript.log' from bootstrap-e2e-minion-group-vkx8
Copying 'kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov startupscript.log' from bootstrap-e2e-minion-group-bf58
Copying 'kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov startupscript.log' from bootstrap-e2e-minion-group-g27b

Specify --start=67847 in the next get-serial-port-output invocation to get only the new output starting from here.
... skipping 3 lines ...
Specify --start=68009 in the next get-serial-port-output invocation to get only the new output starting from here.
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
INSTANCE_GROUPS=bootstrap-e2e-minion-group
NODE_NAMES=bootstrap-e2e-minion-group-bf58 bootstrap-e2e-minion-group-g27b bootstrap-e2e-minion-group-vkx8
Failures for bootstrap-e2e-minion-group (if any):
2020/10/24 09:43:51 process.go:155: Step './cluster/log-dump/log-dump.sh /logs/artifacts/before' finished in 2m4.291293397s
2020/10/24 09:43:51 process.go:153: Running: ./hack/e2e-internal/e2e-status.sh
Project: gce-up-c1-4-glat-up-clu
... skipping 14 lines ...
Using master: bootstrap-e2e-master (external IP: 34.105.36.219; internal IP: (not set))
Oct 24 09:43:54.891: INFO: Fetching cloud provider for "gce"
I1024 09:43:54.891932  143945 test_context.go:453] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I1024 09:43:54.892574  143945 gce.go:903] Using DefaultTokenSource &oauth2.reuseTokenSource{new:jwt.jwtSource{ctx:(*context.emptyCtx)(0xc00005a0b0), conf:(*jwt.Config)(0xc0021f0780)}, mu:sync.Mutex{state:0, sema:0x0}, t:(*oauth2.Token)(nil)}
W1024 09:43:54.960806  143945 gce.go:474] No network name or URL specified.
I1024 09:43:54.960983  143945 e2e.go:129] Starting e2e run "0513d69e-cea3-4743-9d3b-d041cd37ab4c" on Ginkgo node 1
{"msg":"Test Suite starting","total":306,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1603532633 - Will randomize all specs
Will run 306 of 5229 specs

Oct 24 09:44:00.011: INFO: cluster-master-image: cos-85-13310-1041-9
... skipping 45 lines ...
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 09:44:14.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3516" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:781
•{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":306,"completed":1,"skipped":3,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected configMap
... skipping 9 lines ...
STEP: Updating configmap projected-configmap-test-upd-31935285-6d2b-4e3f-ac18-a1af699bf899
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 09:44:19.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7965" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":306,"completed":2,"skipped":29,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should find a service from listing all namespaces [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Services
... skipping 10 lines ...
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 09:44:19.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-173" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:781
•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":306,"completed":3,"skipped":53,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected configMap
... skipping 12 lines ...
STEP: Creating configMap with name cm-test-opt-create-b58741eb-a6b4-4b10-bcfe-7d771cbe56f0
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 09:45:51.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8972" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":306,"completed":4,"skipped":71,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Watchers
... skipping 13 lines ...
Oct 24 09:45:52.391: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-4736  10576ada-61af-431e-b5dd-69a9534f1a2b 1410 0 2020-10-24 09:45:52 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] []  [{e2e.test Update v1 2020-10-24 09:45:52 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Oct 24 09:45:52.391: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-4736  10576ada-61af-431e-b5dd-69a9534f1a2b 1411 0 2020-10-24 09:45:52 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] []  [{e2e.test Update v1 2020-10-24 09:45:52 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 09:45:52.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4736" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":306,"completed":5,"skipped":82,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Downward API volume
... skipping 9 lines ...
STEP: Creating the pod
Oct 24 09:45:55.725: INFO: Successfully updated pod "annotationupdate31323d8f-141a-4622-a799-554299bfcff3"
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 09:45:59.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1584" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":306,"completed":6,"skipped":124,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Services
... skipping 35 lines ...
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 09:46:16.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7606" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:781
•{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":306,"completed":7,"skipped":165,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 27 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 09:46:32.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4113" for this suite.
STEP: Destroying namespace "webhook-4113-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":306,"completed":8,"skipped":189,"failed":0}
SSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Watchers
... skipping 17 lines ...
Oct 24 09:46:33.302: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-2154  3f278b40-ab09-4699-acb7-a4a8e395dd95 1642 0 2020-10-24 09:46:32 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-10-24 09:46:33 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Oct 24 09:46:33.302: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-2154  3f278b40-ab09-4699-acb7-a4a8e395dd95 1643 0 2020-10-24 09:46:32 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-10-24 09:46:33 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 09:46:33.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2154" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":306,"completed":9,"skipped":192,"failed":0}
SSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Service endpoints latency
... skipping 418 lines ...
Oct 24 09:46:47.245: INFO: 99 %ile: 1.852887368s
Oct 24 09:46:47.245: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 09:46:47.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-7680" for this suite.
•{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":306,"completed":10,"skipped":201,"failed":0}
SSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
... skipping 8 lines ...
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 09:46:50.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-6922" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":306,"completed":11,"skipped":204,"failed":0}
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 24 09:46:50.149: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename webhook
... skipping 13 lines ...
Oct 24 09:46:59.607: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
Oct 24 09:47:00.607: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
Oct 24 09:47:01.607: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
Oct 24 09:47:02.607: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
Oct 24 09:47:03.607: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
Oct 24 09:47:04.607: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 09:47:05.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1867" for this suite.
STEP: Destroying namespace "webhook-1867-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":306,"completed":12,"skipped":207,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 13 lines ...
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 09:47:23.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4981" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":306,"completed":13,"skipped":228,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-cli] Kubectl client
... skipping 22 lines ...
Oct 24 09:47:27.280: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Oct 24 09:47:27.280: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2353 describe pod agnhost-primary-d7g4q'
Oct 24 09:47:27.574: INFO: stderr: ""
Oct 24 09:47:27.574: INFO: stdout: "Name:         agnhost-primary-d7g4q\nNamespace:    kubectl-2353\nPriority:     0\nNode:         bootstrap-e2e-minion-group-g27b/10.138.0.4\nStart Time:   Sat, 24 Oct 2020 09:47:24 +0000\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nStatus:       Running\nIP:           10.64.2.10\nIPs:\n  IP:           10.64.2.10\nControlled By:  ReplicationController/agnhost-primary\nContainers:\n  agnhost-primary:\n    Container ID:   containerd://3545f84ce9d542084c7f4a0c1c5dac8700e6eba74613b2c1658a684936a3361b\n    Image:          k8s.gcr.io/e2e-test-images/agnhost:2.21\n    Image ID:       k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Sat, 24 Oct 2020 09:47:26 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-wqt58 (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-wqt58:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-wqt58\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n  Type    Reason     Age   From               Message\n  ----    ------     ----  ----               -------\n  Normal  Scheduled  3s    default-scheduler  Successfully assigned kubectl-2353/agnhost-primary-d7g4q to bootstrap-e2e-minion-group-g27b\n  Normal  Pulled     2s    kubelet            Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.21\" already present on machine\n  Normal  Created    1s   
 kubelet            Created container agnhost-primary\n  Normal  Started    1s    kubelet            Started container agnhost-primary\n"
Oct 24 09:47:27.574: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2353 describe rc agnhost-primary'
Oct 24 09:47:27.926: INFO: stderr: ""
Oct 24 09:47:27.926: INFO: stdout: "Name:         agnhost-primary\nNamespace:    kubectl-2353\nSelector:     app=agnhost,role=primary\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=primary\n  Containers:\n   agnhost-primary:\n    Image:        k8s.gcr.io/e2e-test-images/agnhost:2.21\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  3s    replication-controller  Created pod: agnhost-primary-d7g4q\n"
Oct 24 09:47:27.926: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2353 describe service agnhost-primary'
Oct 24 09:47:28.251: INFO: stderr: ""
Oct 24 09:47:28.251: INFO: stdout: "Name:              agnhost-primary\nNamespace:         kubectl-2353\nLabels:            app=agnhost\n                   role=primary\nAnnotations:       <none>\nSelector:          app=agnhost,role=primary\nType:              ClusterIP\nIP:                10.0.82.209\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.64.2.10:6379\nSession Affinity:  None\nEvents:            <none>\n"
Oct 24 09:47:28.302: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2353 describe node bootstrap-e2e-master'
Oct 24 09:47:28.885: INFO: stderr: ""
Oct 24 09:47:28.885: INFO: stdout: "Name:               bootstrap-e2e-master\nRoles:              <none>\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/instance-type=n1-standard-1\n                    beta.kubernetes.io/os=linux\n                    cloud.google.com/metadata-proxy-ready=true\n                    failure-domain.beta.kubernetes.io/region=us-west1\n                    failure-domain.beta.kubernetes.io/zone=us-west1-b\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=bootstrap-e2e-master\n                    kubernetes.io/os=linux\n                    node.kubernetes.io/instance-type=n1-standard-1\n                    topology.kubernetes.io/region=us-west1\n                    topology.kubernetes.io/zone=us-west1-b\nAnnotations:        node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sat, 24 Oct 2020 09:41:08 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\n                    node.kubernetes.io/unschedulable:NoSchedule\nUnschedulable:      true\nLease:\n  HolderIdentity:  bootstrap-e2e-master\n  AcquireTime:     <unset>\n  RenewTime:       Sat, 24 Oct 2020 09:47:20 +0000\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sat, 24 Oct 2020 09:41:20 +0000   Sat, 24 Oct 2020 09:41:20 +0000   RouteCreated                 RouteController created a route\n  MemoryPressure       False   Sat, 24 Oct 2020 09:46:50 +0000   Sat, 24 Oct 2020 09:41:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Sat, 24 Oct 2020 09:46:50 +0000   Sat, 24 Oct 2020 
09:41:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Sat, 24 Oct 2020 09:46:50 +0000   Sat, 24 Oct 2020 09:41:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Sat, 24 Oct 2020 09:46:50 +0000   Sat, 24 Oct 2020 09:41:17 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:   10.138.0.2\n  ExternalIP:   34.105.36.219\n  InternalDNS:  bootstrap-e2e-master.c.gce-up-c1-4-glat-up-clu.internal\n  Hostname:     bootstrap-e2e-master.c.gce-up-c1-4-glat-up-clu.internal\nCapacity:\n  attachable-volumes-gce-pd:  127\n  cpu:                        1\n  ephemeral-storage:          16293736Ki\n  hugepages-2Mi:              0\n  memory:                     3776180Ki\n  pods:                       110\nAllocatable:\n  attachable-volumes-gce-pd:  127\n  cpu:                        1\n  ephemeral-storage:          15016307073\n  hugepages-2Mi:              0\n  memory:                     3520180Ki\n  pods:                       110\nSystem Info:\n  Machine ID:                 e94e20813cf16284af0ca82ca7916a34\n  System UUID:                e94e2081-3cf1-6284-af0c-a82ca7916a34\n  Boot ID:                    11a814dc-deba-4a5a-8af4-b5db11f69dc3\n  Kernel Version:             5.4.49+\n  OS Image:                   Container-Optimized OS from Google\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.4.1\n  Kubelet Version:            v1.20.0-alpha.3.114+5935fcd704fe89\n  Kube-Proxy Version:         v1.20.0-alpha.3.114+5935fcd704fe89\nPodCIDR:                      10.64.0.0/24\nPodCIDRs:                     10.64.0.0/24\nProviderID:                   gce://gce-up-c1-4-glat-up-clu/us-west1-b/bootstrap-e2e-master\nNon-terminated Pods:          (8 in total)\n  Namespace                   Name                                            CPU Requests  CPU 
Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                                            ------------  ----------  ---------------  -------------  ---\n  kube-system                 etcd-server-bootstrap-e2e-master                200m (20%)    0 (0%)      0 (0%)           0 (0%)         5m49s\n  kube-system                 etcd-server-events-bootstrap-e2e-master         100m (10%)    0 (0%)      0 (0%)           0 (0%)         5m45s\n  kube-system                 kube-addon-manager-bootstrap-e2e-master         5m (0%)       0 (0%)      50Mi (1%)        0 (0%)         5m5s\n  kube-system                 kube-apiserver-bootstrap-e2e-master             250m (25%)    0 (0%)      0 (0%)           0 (0%)         5m37s\n  kube-system                 kube-controller-manager-bootstrap-e2e-master    200m (20%)    0 (0%)      0 (0%)           0 (0%)         5m57s\n  kube-system                 kube-scheduler-bootstrap-e2e-master             75m (7%)      0 (0%)      0 (0%)           0 (0%)         5m3s\n  kube-system                 l7-lb-controller-bootstrap-e2e-master           10m (1%)      0 (0%)      50Mi (1%)        0 (0%)         5m1s\n  kube-system                 metadata-proxy-v0.1-mpvsn                       32m (3%)      32m (3%)    45Mi (1%)        45Mi (1%)      6m19s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource                   Requests    Limits\n  --------                   --------    ------\n  cpu                        872m (87%)  32m (3%)\n  memory                     145Mi (4%)  45Mi (1%)\n  ephemeral-storage          0 (0%)      0 (0%)\n  hugepages-2Mi              0 (0%)      0 (0%)\n  attachable-volumes-gce-pd  0           0\nEvents:                      <none>\n"
Oct 24 09:47:28.885: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2353 describe namespace kubectl-2353'
Oct 24 09:47:29.194: INFO: stderr: ""
Oct 24 09:47:29.194: INFO: stdout: "Name:         kubectl-2353\nLabels:       e2e-framework=kubectl\n              e2e-run=0513d69e-cea3-4743-9d3b-d041cd37ab4c\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 09:47:29.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2353" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":306,"completed":14,"skipped":253,"failed":0}
SSSSSS
------------------------------
[sig-network] Services 
  should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Services
... skipping 94 lines ...
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 09:48:19.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9153" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:781
•{"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":306,"completed":15,"skipped":259,"failed":0}
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-cli] Kubectl client
... skipping 19 lines ...
Oct 24 09:49:27.900: INFO: stderr: ""
Oct 24 09:49:27.900: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 09:49:27.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4999" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":306,"completed":16,"skipped":268,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Oct 24 09:49:28.308: INFO: Waiting up to 5m0s for pod "downwardapi-volume-669faa7f-5848-4cbf-841d-7b9c0d7cde67" in namespace "downward-api-8252" to be "Succeeded or Failed"
Oct 24 09:49:28.350: INFO: Pod "downwardapi-volume-669faa7f-5848-4cbf-841d-7b9c0d7cde67": Phase="Pending", Reason="", readiness=false. Elapsed: 41.98695ms
Oct 24 09:49:30.420: INFO: Pod "downwardapi-volume-669faa7f-5848-4cbf-841d-7b9c0d7cde67": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.112119702s
STEP: Saw pod success
Oct 24 09:49:30.420: INFO: Pod "downwardapi-volume-669faa7f-5848-4cbf-841d-7b9c0d7cde67" satisfied condition "Succeeded or Failed"
Oct 24 09:49:30.483: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod downwardapi-volume-669faa7f-5848-4cbf-841d-7b9c0d7cde67 container client-container: <nil>
STEP: delete the pod
Oct 24 09:49:30.754: INFO: Waiting for pod downwardapi-volume-669faa7f-5848-4cbf-841d-7b9c0d7cde67 to disappear
Oct 24 09:49:30.833: INFO: Pod downwardapi-volume-669faa7f-5848-4cbf-841d-7b9c0d7cde67 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 09:49:30.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8252" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":17,"skipped":304,"failed":0}
SSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Variable Expansion
... skipping 2 lines ...
Oct 24 09:49:31.163: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test substitution in container's args
Oct 24 09:49:31.404: INFO: Waiting up to 5m0s for pod "var-expansion-93155497-d6d6-42cc-89f8-acd4ce6148be" in namespace "var-expansion-6831" to be "Succeeded or Failed"
Oct 24 09:49:31.444: INFO: Pod "var-expansion-93155497-d6d6-42cc-89f8-acd4ce6148be": Phase="Pending", Reason="", readiness=false. Elapsed: 39.205533ms
Oct 24 09:49:33.486: INFO: Pod "var-expansion-93155497-d6d6-42cc-89f8-acd4ce6148be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081679012s
Oct 24 09:49:35.526: INFO: Pod "var-expansion-93155497-d6d6-42cc-89f8-acd4ce6148be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.121393215s
STEP: Saw pod success
Oct 24 09:49:35.526: INFO: Pod "var-expansion-93155497-d6d6-42cc-89f8-acd4ce6148be" satisfied condition "Succeeded or Failed"
Oct 24 09:49:35.566: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod var-expansion-93155497-d6d6-42cc-89f8-acd4ce6148be container dapi-container: <nil>
STEP: delete the pod
Oct 24 09:49:35.662: INFO: Waiting for pod var-expansion-93155497-d6d6-42cc-89f8-acd4ce6148be to disappear
Oct 24 09:49:35.701: INFO: Pod var-expansion-93155497-d6d6-42cc-89f8-acd4ce6148be no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 09:49:35.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6831" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":306,"completed":18,"skipped":307,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Services
... skipping 10 lines ...
STEP: creating replication controller nodeport-test in namespace services-946
I1024 09:49:36.248783  143945 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-946, replica count: 2
Oct 24 09:49:39.299: INFO: Creating new exec pod
I1024 09:49:39.299433  143945 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Oct 24 09:49:42.528: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=services-946 exec execpodnfqzj -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
Oct 24 09:49:44.257: INFO: rc: 1
Oct 24 09:49:44.257: INFO: Service reachability failing with error: error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=services-946 exec execpodnfqzj -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 nodeport-test 80
nc: connect to nodeport-test port 80 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 24 09:49:45.257: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=services-946 exec execpodnfqzj -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
Oct 24 09:49:46.754: INFO: stderr: "+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n"
Oct 24 09:49:46.754: INFO: stdout: ""
Oct 24 09:49:46.754: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=services-946 exec execpodnfqzj -- /bin/sh -x -c nc -zv -t -w 2 10.0.198.154 80'
... skipping 14 lines ...
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 09:49:49.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-946" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:781
•{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":306,"completed":19,"skipped":357,"failed":0}
S
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Probing container
... skipping 12 lines ...
Oct 24 09:49:52.404: INFO: Initial restart count of pod busybox-19315fb4-4dfc-4b6b-91de-5f089ac68ff3 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 09:53:54.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7452" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":306,"completed":20,"skipped":358,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Oct 24 09:53:54.137: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0777 on tmpfs
Oct 24 09:53:54.504: INFO: Waiting up to 5m0s for pod "pod-2830b937-559a-45ce-b58c-082b1fccc2bb" in namespace "emptydir-4350" to be "Succeeded or Failed"
Oct 24 09:53:54.553: INFO: Pod "pod-2830b937-559a-45ce-b58c-082b1fccc2bb": Phase="Pending", Reason="", readiness=false. Elapsed: 49.096926ms
Oct 24 09:53:56.613: INFO: Pod "pod-2830b937-559a-45ce-b58c-082b1fccc2bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.108956941s
STEP: Saw pod success
Oct 24 09:53:56.613: INFO: Pod "pod-2830b937-559a-45ce-b58c-082b1fccc2bb" satisfied condition "Succeeded or Failed"
Oct 24 09:53:56.674: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod pod-2830b937-559a-45ce-b58c-082b1fccc2bb container test-container: <nil>
STEP: delete the pod
Oct 24 09:53:56.943: INFO: Waiting for pod pod-2830b937-559a-45ce-b58c-082b1fccc2bb to disappear
Oct 24 09:53:56.988: INFO: Pod pod-2830b937-559a-45ce-b58c-082b1fccc2bb no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 09:53:56.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4350" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":21,"skipped":385,"failed":0}
S
------------------------------
[k8s.io] Variable Expansion 
  should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 24 09:53:57.085: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Oct 24 09:53:59.550: INFO: Deleting pod "var-expansion-d0d15eba-f7ec-4a6b-a0f9-732a6e04b492" in namespace "var-expansion-9210"
Oct 24 09:53:59.601: INFO: Wait up to 5m0s for pod "var-expansion-d0d15eba-f7ec-4a6b-a0f9-732a6e04b492" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 09:54:29.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9210" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":306,"completed":22,"skipped":386,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Oct 24 09:54:30.306: INFO: Waiting up to 5m0s for pod "busybox-user-65534-92302c3c-2e1d-4b46-9a53-a3637c365d44" in namespace "security-context-test-5569" to be "Succeeded or Failed"
Oct 24 09:54:30.346: INFO: Pod "busybox-user-65534-92302c3c-2e1d-4b46-9a53-a3637c365d44": Phase="Pending", Reason="", readiness=false. Elapsed: 39.440956ms
Oct 24 09:54:32.386: INFO: Pod "busybox-user-65534-92302c3c-2e1d-4b46-9a53-a3637c365d44": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.079581587s
Oct 24 09:54:32.386: INFO: Pod "busybox-user-65534-92302c3c-2e1d-4b46-9a53-a3637c365d44" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 09:54:32.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5569" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":23,"skipped":405,"failed":0}
SSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 12 lines ...
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 09:54:44.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7028" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":306,"completed":24,"skipped":409,"failed":0}
S
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
... skipping 18 lines ...
STEP: Deleting second CR
Oct 24 09:55:35.228: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-10-24T09:54:54Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-10-24T09:55:15Z]] name:name2 resourceVersion:5012 uid:42a795e5-afe6-4e60-b83b-9716b206f231] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 09:55:45.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-1128" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":306,"completed":25,"skipped":410,"failed":0}
SSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Networking
... skipping 40 lines ...
Oct 24 09:56:10.652: INFO: reached 10.64.1.21 after 0/1 tries
Oct 24 09:56:10.652: INFO: Going to retry 0 out of 3 pods....
[AfterEach] [sig-network] Networking
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 09:56:10.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-141" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":306,"completed":26,"skipped":420,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Pods
... skipping 10 lines ...
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 09:56:15.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-482" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":306,"completed":27,"skipped":457,"failed":0}
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating projection with secret that has name projected-secret-test-cd67e7a5-0764-4c16-8257-537ba0ff1d54
STEP: Creating a pod to test consume secrets
Oct 24 09:56:15.829: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3c1af784-298e-46c5-87c4-1c0de266feda" in namespace "projected-2738" to be "Succeeded or Failed"
Oct 24 09:56:15.882: INFO: Pod "pod-projected-secrets-3c1af784-298e-46c5-87c4-1c0de266feda": Phase="Pending", Reason="", readiness=false. Elapsed: 52.734556ms
Oct 24 09:56:17.978: INFO: Pod "pod-projected-secrets-3c1af784-298e-46c5-87c4-1c0de266feda": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.149239526s
STEP: Saw pod success
Oct 24 09:56:17.978: INFO: Pod "pod-projected-secrets-3c1af784-298e-46c5-87c4-1c0de266feda" satisfied condition "Succeeded or Failed"
Oct 24 09:56:18.069: INFO: Trying to get logs from node bootstrap-e2e-minion-group-bf58 pod pod-projected-secrets-3c1af784-298e-46c5-87c4-1c0de266feda container projected-secret-volume-test: <nil>
STEP: delete the pod
Oct 24 09:56:18.340: INFO: Waiting for pod pod-projected-secrets-3c1af784-298e-46c5-87c4-1c0de266feda to disappear
Oct 24 09:56:18.379: INFO: Pod pod-projected-secrets-3c1af784-298e-46c5-87c4-1c0de266feda no longer exists
[AfterEach] [sig-storage] Projected secret
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 09:56:18.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2738" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":306,"completed":28,"skipped":458,"failed":0}
SSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Services
... skipping 24 lines ...
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 09:56:24.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2977" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:781
•{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":306,"completed":29,"skipped":462,"failed":0}

------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Oct 24 09:56:24.989: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0666 on tmpfs
Oct 24 09:56:26.047: INFO: Waiting up to 5m0s for pod "pod-e5f62900-e8fe-4a55-82ee-1e540bf88c18" in namespace "emptydir-8157" to be "Succeeded or Failed"
Oct 24 09:56:26.089: INFO: Pod "pod-e5f62900-e8fe-4a55-82ee-1e540bf88c18": Phase="Pending", Reason="", readiness=false. Elapsed: 41.582623ms
Oct 24 09:56:28.130: INFO: Pod "pod-e5f62900-e8fe-4a55-82ee-1e540bf88c18": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.082439567s
STEP: Saw pod success
Oct 24 09:56:28.130: INFO: Pod "pod-e5f62900-e8fe-4a55-82ee-1e540bf88c18" satisfied condition "Succeeded or Failed"
Oct 24 09:56:28.170: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod pod-e5f62900-e8fe-4a55-82ee-1e540bf88c18 container test-container: <nil>
STEP: delete the pod
Oct 24 09:56:28.305: INFO: Waiting for pod pod-e5f62900-e8fe-4a55-82ee-1e540bf88c18 to disappear
Oct 24 09:56:28.345: INFO: Pod pod-e5f62900-e8fe-4a55-82ee-1e540bf88c18 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 09:56:28.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8157" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":30,"skipped":462,"failed":0}
S
------------------------------
[sig-node] PodTemplates 
  should delete a collection of pod templates [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-node] PodTemplates
... skipping 14 lines ...
STEP: check that the list of pod templates matches the requested quantity
Oct 24 09:56:28.905: INFO: requesting list of pod templates to confirm quantity
[AfterEach] [sig-node] PodTemplates
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 09:56:28.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-9151" for this suite.
•{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":306,"completed":31,"skipped":463,"failed":0}
SSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-cli] Kubectl client
... skipping 60 lines ...
Oct 24 09:56:37.413: INFO: stderr: ""
Oct 24 09:56:37.413: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 09:56:37.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4783" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":306,"completed":32,"skipped":470,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Oct 24 09:56:37.942: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-91addf13-3aa9-46f0-8f7e-cd1afac12807" in namespace "security-context-test-8040" to be "Succeeded or Failed"
Oct 24 09:56:38.013: INFO: Pod "busybox-privileged-false-91addf13-3aa9-46f0-8f7e-cd1afac12807": Phase="Pending", Reason="", readiness=false. Elapsed: 70.422168ms
Oct 24 09:56:40.053: INFO: Pod "busybox-privileged-false-91addf13-3aa9-46f0-8f7e-cd1afac12807": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.110340854s
Oct 24 09:56:40.053: INFO: Pod "busybox-privileged-false-91addf13-3aa9-46f0-8f7e-cd1afac12807" satisfied condition "Succeeded or Failed"
Oct 24 09:56:40.097: INFO: Got logs for pod "busybox-privileged-false-91addf13-3aa9-46f0-8f7e-cd1afac12807": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 09:56:40.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-8040" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":33,"skipped":496,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 12 lines ...
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 09:56:51.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4174" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":306,"completed":34,"skipped":503,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Probing container
... skipping 12 lines ...
Oct 24 09:56:54.183: INFO: Initial restart count of pod test-webserver-7960ea56-d69b-4a06-9cd0-a80af2e2e84b is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:00:56.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5419" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":306,"completed":35,"skipped":522,"failed":0}
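This probe test schedules a webserver pod, records its initial restart count of 0, and asserts it stays at 0 for roughly four minutes while the /healthz liveness probe keeps passing. A minimal sketch of such a pod (image and probe timings are assumptions, not the suite's exact values):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-webserver-demo     # illustrative name
spec:
  containers:
  - name: test-webserver
    image: k8s.gcr.io/e2e-test-images/agnhost:2.21   # assumption; any image serving /healthz works
    livenessProbe:
      httpGet:
        path: /healthz
        port: 80
      initialDelaySeconds: 15
      failureThreshold: 1
      # As long as /healthz returns 200, restartCount stays at 0,
      # which is the condition the test asserts.
```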
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-cli] Kubectl client
... skipping 10 lines ...
Oct 24 10:00:56.605: INFO: Asynchronously running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=kubectl-985 proxy --unix-socket=/tmp/kubectl-proxy-unix646767828/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:00:56.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-985" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":306,"completed":36,"skipped":533,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-cli] Kubectl client
... skipping 51 lines ...
Oct 24 10:01:07.869: INFO: stderr: ""
Oct 24 10:01:07.869: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:01:07.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1893" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":306,"completed":37,"skipped":547,"failed":0}
S
------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 11 lines ...
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:01:08.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9256" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":306,"completed":38,"skipped":548,"failed":0}
SSSS
------------------------------
[k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial] 
  removing taint cancels eviction [Disruptive] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial]
... skipping 20 lines ...
STEP: Waiting some time to make sure that toleration time passed.
Oct 24 10:03:24.452: INFO: Pod wasn't evicted. Test successful
[AfterEach] [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:03:24.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "taint-single-pod-4477" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial] removing taint cancels eviction [Disruptive] [Conformance]","total":306,"completed":39,"skipped":552,"failed":0}
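"Removing taint cancels eviction" works because the pod tolerates the NoExecute taint only for a bounded `tolerationSeconds`; the suite removes the taint before that window expires, so the taint manager cancels the pending eviction and the pod survives ("Pod wasn't evicted" above). A sketch of such a toleration (key, value, and timings are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: taint-eviction-demo   # illustrative name
spec:
  tolerations:
  - key: kubernetes.io/e2e-evict-taint-key   # assumed key; the suite generates its own
    operator: Equal
    value: evictTaintVal
    effect: NoExecute
    tolerationSeconds: 200    # eviction is scheduled for after this window;
                              # deleting the taint first cancels it
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2
```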
SSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 54 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:03:29.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2674" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":306,"completed":40,"skipped":561,"failed":0}
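The NodeSelector predicate test labels a single node with a unique key/value, then confirms that a pod whose `nodeSelector` matches is scheduled onto exactly that node. A sketch (the label is hypothetical; the suite generates a random one per run):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nodeselector-demo   # illustrative name
spec:
  nodeSelector:
    kubernetes.io/e2e-label: "42"   # hypothetical label applied to one node beforehand
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2
```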
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Oct 24 10:03:30.130: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ff721b84-9308-417a-82d5-b59b17c42c70" in namespace "projected-1839" to be "Succeeded or Failed"
Oct 24 10:03:30.180: INFO: Pod "downwardapi-volume-ff721b84-9308-417a-82d5-b59b17c42c70": Phase="Pending", Reason="", readiness=false. Elapsed: 49.478363ms
Oct 24 10:03:32.252: INFO: Pod "downwardapi-volume-ff721b84-9308-417a-82d5-b59b17c42c70": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.122148868s
STEP: Saw pod success
Oct 24 10:03:32.252: INFO: Pod "downwardapi-volume-ff721b84-9308-417a-82d5-b59b17c42c70" satisfied condition "Succeeded or Failed"
Oct 24 10:03:32.350: INFO: Trying to get logs from node bootstrap-e2e-minion-group-g27b pod downwardapi-volume-ff721b84-9308-417a-82d5-b59b17c42c70 container client-container: <nil>
STEP: delete the pod
Oct 24 10:03:32.684: INFO: Waiting for pod downwardapi-volume-ff721b84-9308-417a-82d5-b59b17c42c70 to disappear
Oct 24 10:03:32.742: INFO: Pod downwardapi-volume-ff721b84-9308-417a-82d5-b59b17c42c70 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:03:32.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1839" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":306,"completed":41,"skipped":582,"failed":0}
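The projected downwardAPI test exposes the container's memory limit as a file inside the pod and then reads it back from the container's log. A hedged sketch of the mechanism (paths, names, and the limit value are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    # Print the projected file so the test can verify it from the pod logs.
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: "64Mi"            # the limit the projected file reports (in bytes)
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
```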

------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-cli] Kubectl client
... skipping 131 lines ...
Oct 24 10:03:56.145: INFO: stderr: ""
Oct 24 10:03:56.145: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:03:56.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4808" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":306,"completed":42,"skipped":582,"failed":0}

------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] StatefulSet
... skipping 127 lines ...
Oct 24 10:04:51.005: INFO: ss-1  bootstrap-e2e-minion-group-g27b  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-24 10:04:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-24 10:04:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-24 10:04:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-24 10:04:17 +0000 UTC  }]
Oct 24 10:04:51.005: INFO: 
Oct 24 10:04:51.005: INFO: StatefulSet ss has not reached scale 0, at 1
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-8297
Oct 24 10:04:52.045: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8297 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 24 10:04:52.378: INFO: rc: 1
Oct 24 10:04:52.378: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8297 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
... skipping 30 lines ...
Oct 24 10:05:33.410: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8297 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 24 10:05:33.643: INFO: rc: 1
Oct 24 10:05:33.643: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8297 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
... skipping 250 lines ...
Oct 24 10:10:01.028: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8297 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 24 10:10:01.432: INFO: rc: 1
Oct 24 10:10:01.432: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: 
Oct 24 10:10:01.432: INFO: Scaling statefulset ss to 0
Oct 24 10:10:01.552: INFO: Waiting for statefulset status.replicas updated to 0
... skipping 13 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":306,"completed":43,"skipped":582,"failed":0}
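The retries above follow a fixed pattern: run the `kubectl ... exec` command, and on a nonzero rc wait 10s and try again until an overall deadline. A minimal sketch of that retry loop, with a placeholder `run_host_cmd` standing in for the kubectl invocation (the suite's real interval is 10s and its timeout much longer):

```shell
#!/bin/sh
# Hedged sketch of the RunHostCmd retry pattern seen in the log above.
# retry_until TIMEOUT INTERVAL CMD... : rerun CMD every INTERVAL seconds
# until it succeeds or TIMEOUT seconds have elapsed.
retry_until() {
  timeout=$1; interval=$2; shift 2
  deadline=$(( $(date +%s) + timeout ))
  while ! "$@"; do
    if [ "$(date +%s)" -ge "$deadline" ]; then
      echo "retry deadline exceeded" >&2
      return 1
    fi
    sleep "$interval"
  done
}

# Placeholder for the real command, e.g.:
#   kubectl --namespace=statefulset-8297 exec ss-1 -- \
#     /bin/sh -x -c 'mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
run_host_cmd() { true; }
```

Here the first five attempts in the log fail with `NotFound` because pod ss-1 has not come back yet; the loop simply keeps polling until it does or the deadline fires.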
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Watchers
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 20 lines ...
Oct 24 10:10:12.765: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-8703  123a9a57-3f1e-4613-ad5f-6b72772d5df6 7297 0 2020-10-24 10:10:02 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-10-24 10:10:02 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
Oct 24 10:10:12.765: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-8703  123a9a57-3f1e-4613-ad5f-6b72772d5df6 7298 0 2020-10-24 10:10:02 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-10-24 10:10:02 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:10:12.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8703" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":306,"completed":44,"skipped":582,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Oct 24 10:10:13.088: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bcf9a3f3-1482-47a3-88c5-0e81c5edae23" in namespace "projected-6475" to be "Succeeded or Failed"
Oct 24 10:10:13.128: INFO: Pod "downwardapi-volume-bcf9a3f3-1482-47a3-88c5-0e81c5edae23": Phase="Pending", Reason="", readiness=false. Elapsed: 39.356595ms
Oct 24 10:10:15.167: INFO: Pod "downwardapi-volume-bcf9a3f3-1482-47a3-88c5-0e81c5edae23": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.079004041s
STEP: Saw pod success
Oct 24 10:10:15.167: INFO: Pod "downwardapi-volume-bcf9a3f3-1482-47a3-88c5-0e81c5edae23" satisfied condition "Succeeded or Failed"
Oct 24 10:10:15.207: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod downwardapi-volume-bcf9a3f3-1482-47a3-88c5-0e81c5edae23 container client-container: <nil>
STEP: delete the pod
Oct 24 10:10:15.331: INFO: Waiting for pod downwardapi-volume-bcf9a3f3-1482-47a3-88c5-0e81c5edae23 to disappear
Oct 24 10:10:15.372: INFO: Pod downwardapi-volume-bcf9a3f3-1482-47a3-88c5-0e81c5edae23 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:10:15.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6475" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":306,"completed":45,"skipped":619,"failed":0}
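The test above mounts the container's memory request into a file via a projected downward API volume. A minimal manifest sketch of that pattern (name and image are illustrative, not the suite's actual spec):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.28
    command: ["sh", "-c", "cat /etc/podinfo/mem_request"]
    resources:
      requests:
        memory: "32Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
```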
SSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-node] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 24 10:10:15.459: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap that has name configmap-test-emptyKey-53763314-ce28-4820-ba12-40a1e0b557fb
[AfterEach] [sig-node] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:10:15.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-57" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":306,"completed":46,"skipped":636,"failed":0}
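The API server rejects the ConfigMap above because data keys must be non-empty. A hedged client-side approximation of the key rules (non-empty, at most 253 characters, only alphanumerics, `-`, `_`, `.`) — an illustration, not the canonical validator:

```shell
#!/bin/sh
# Approximate check mirroring the apiserver's ConfigMap data-key validation.
# valid_configmap_key KEY : exit 0 if KEY looks acceptable, 1 otherwise.
valid_configmap_key() {
  [ "${#1}" -le 253 ] && printf '%s' "$1" | grep -Eq '^[-._a-zA-Z0-9]+$'
}
```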
S
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] DNS
... skipping 18 lines ...
Oct 24 10:10:30.570: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4437.svc.cluster.local from pod dns-4437/dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3: the server could not find the requested resource (get pods dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3)
Oct 24 10:10:30.659: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4437.svc.cluster.local from pod dns-4437/dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3: the server could not find the requested resource (get pods dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3)
Oct 24 10:10:31.234: INFO: Unable to read jessie_udp@dns-test-service.dns-4437.svc.cluster.local from pod dns-4437/dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3: the server could not find the requested resource (get pods dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3)
Oct 24 10:10:31.372: INFO: Unable to read jessie_tcp@dns-test-service.dns-4437.svc.cluster.local from pod dns-4437/dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3: the server could not find the requested resource (get pods dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3)
Oct 24 10:10:31.478: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4437.svc.cluster.local from pod dns-4437/dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3: the server could not find the requested resource (get pods dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3)
Oct 24 10:10:31.617: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4437.svc.cluster.local from pod dns-4437/dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3: the server could not find the requested resource (get pods dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3)
Oct 24 10:10:31.941: INFO: Lookups using dns-4437/dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3 failed for: [wheezy_udp@dns-test-service.dns-4437.svc.cluster.local wheezy_tcp@dns-test-service.dns-4437.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4437.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4437.svc.cluster.local jessie_udp@dns-test-service.dns-4437.svc.cluster.local jessie_tcp@dns-test-service.dns-4437.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4437.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4437.svc.cluster.local]

Oct 24 10:10:37.004: INFO: Unable to read wheezy_udp@dns-test-service.dns-4437.svc.cluster.local from pod dns-4437/dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3: the server could not find the requested resource (get pods dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3)
Oct 24 10:10:37.101: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4437.svc.cluster.local from pod dns-4437/dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3: the server could not find the requested resource (get pods dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3)
Oct 24 10:10:37.216: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4437.svc.cluster.local from pod dns-4437/dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3: the server could not find the requested resource (get pods dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3)
Oct 24 10:10:37.288: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4437.svc.cluster.local from pod dns-4437/dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3: the server could not find the requested resource (get pods dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3)
Oct 24 10:10:37.821: INFO: Unable to read jessie_udp@dns-test-service.dns-4437.svc.cluster.local from pod dns-4437/dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3: the server could not find the requested resource (get pods dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3)
Oct 24 10:10:37.882: INFO: Unable to read jessie_tcp@dns-test-service.dns-4437.svc.cluster.local from pod dns-4437/dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3: the server could not find the requested resource (get pods dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3)
Oct 24 10:10:37.962: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4437.svc.cluster.local from pod dns-4437/dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3: the server could not find the requested resource (get pods dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3)
Oct 24 10:10:38.052: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4437.svc.cluster.local from pod dns-4437/dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3: the server could not find the requested resource (get pods dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3)
Oct 24 10:10:38.494: INFO: Lookups using dns-4437/dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3 failed for: [wheezy_udp@dns-test-service.dns-4437.svc.cluster.local wheezy_tcp@dns-test-service.dns-4437.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4437.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4437.svc.cluster.local jessie_udp@dns-test-service.dns-4437.svc.cluster.local jessie_tcp@dns-test-service.dns-4437.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4437.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4437.svc.cluster.local]

Oct 24 10:10:41.983: INFO: Unable to read wheezy_udp@dns-test-service.dns-4437.svc.cluster.local from pod dns-4437/dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3: the server could not find the requested resource (get pods dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3)
Oct 24 10:10:42.030: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4437.svc.cluster.local from pod dns-4437/dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3: the server could not find the requested resource (get pods dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3)
Oct 24 10:10:42.071: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4437.svc.cluster.local from pod dns-4437/dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3: the server could not find the requested resource (get pods dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3)
Oct 24 10:10:42.112: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4437.svc.cluster.local from pod dns-4437/dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3: the server could not find the requested resource (get pods dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3)
Oct 24 10:10:42.403: INFO: Unable to read jessie_udp@dns-test-service.dns-4437.svc.cluster.local from pod dns-4437/dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3: the server could not find the requested resource (get pods dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3)
Oct 24 10:10:42.444: INFO: Unable to read jessie_tcp@dns-test-service.dns-4437.svc.cluster.local from pod dns-4437/dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3: the server could not find the requested resource (get pods dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3)
Oct 24 10:10:42.485: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4437.svc.cluster.local from pod dns-4437/dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3: the server could not find the requested resource (get pods dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3)
Oct 24 10:10:42.526: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4437.svc.cluster.local from pod dns-4437/dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3: the server could not find the requested resource (get pods dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3)
Oct 24 10:10:42.778: INFO: Lookups using dns-4437/dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3 failed for: [wheezy_udp@dns-test-service.dns-4437.svc.cluster.local wheezy_tcp@dns-test-service.dns-4437.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4437.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4437.svc.cluster.local jessie_udp@dns-test-service.dns-4437.svc.cluster.local jessie_tcp@dns-test-service.dns-4437.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4437.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4437.svc.cluster.local]

Oct 24 10:10:47.021: INFO: Unable to read wheezy_udp@dns-test-service.dns-4437.svc.cluster.local from pod dns-4437/dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3: the server could not find the requested resource (get pods dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3)
Oct 24 10:10:47.062: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4437.svc.cluster.local from pod dns-4437/dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3: the server could not find the requested resource (get pods dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3)
Oct 24 10:10:47.103: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4437.svc.cluster.local from pod dns-4437/dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3: the server could not find the requested resource (get pods dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3)
Oct 24 10:10:47.144: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4437.svc.cluster.local from pod dns-4437/dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3: the server could not find the requested resource (get pods dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3)
Oct 24 10:10:47.435: INFO: Unable to read jessie_udp@dns-test-service.dns-4437.svc.cluster.local from pod dns-4437/dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3: the server could not find the requested resource (get pods dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3)
Oct 24 10:10:47.477: INFO: Unable to read jessie_tcp@dns-test-service.dns-4437.svc.cluster.local from pod dns-4437/dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3: the server could not find the requested resource (get pods dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3)
Oct 24 10:10:47.598: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4437.svc.cluster.local from pod dns-4437/dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3: the server could not find the requested resource (get pods dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3)
Oct 24 10:10:47.889: INFO: Lookups using dns-4437/dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3 failed for: [wheezy_udp@dns-test-service.dns-4437.svc.cluster.local wheezy_tcp@dns-test-service.dns-4437.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4437.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4437.svc.cluster.local jessie_udp@dns-test-service.dns-4437.svc.cluster.local jessie_tcp@dns-test-service.dns-4437.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4437.svc.cluster.local]

Oct 24 10:10:53.599: INFO: DNS probes using dns-4437/dns-test-a774bba8-4573-49f0-bd02-c6dc8652dca3 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:10:54.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4437" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":306,"completed":47,"skipped":637,"failed":0}
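The names probed above follow the standard in-cluster DNS scheme: `<service>.<namespace>.svc.cluster.local` for A/AAAA records and `_<port>._<proto>.` prefixed onto that for SRV records (assuming the default `cluster.local` suffix, as in this cluster). A small sketch of how those names are composed:

```shell
#!/bin/sh
# Compose the cluster-DNS names the e2e DNS test resolves.
svc_dns_name() {   # $1=service $2=namespace
  printf '%s.%s.svc.cluster.local' "$1" "$2"
}
srv_dns_name() {   # $1=port-name $2=protocol $3=service $4=namespace
  printf '_%s._%s.%s' "$1" "$2" "$(svc_dns_name "$3" "$4")"
}
```

For example, `srv_dns_name http tcp dns-test-service dns-4437` yields the `_http._tcp.dns-test-service.dns-4437.svc.cluster.local` name seen in the lookup failures above.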
SSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] Deployment
... skipping 43 lines ...
Oct 24 10:11:15.171: INFO: Pod "test-rollover-deployment-668db69979-m6r6j" is available:
&Pod{ObjectMeta:{test-rollover-deployment-668db69979-m6r6j test-rollover-deployment-668db69979- deployment-1173  6e47164b-4a23-4573-9ef9-bd9cb3bea32c 7517 0 2020-10-24 10:11:02 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:668db69979] map[] [{apps/v1 ReplicaSet test-rollover-deployment-668db69979 f63b9645-9074-4e22-b69a-bc5f31c45462 0xc000b91807 0xc000b91808}] []  [{kube-controller-manager Update v1 2020-10-24 10:11:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f63b9645-9074-4e22-b69a-bc5f31c45462\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-24 10:11:04 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.64.1.38\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-26qwx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-26qwx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-26qwx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-vkx8,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-24 10:11:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-24 10:11:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-24 10:11:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-24 10:11:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.3,PodIP:10.64.1.38,StartTime:2020-10-24 10:11:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-24 10:11:03 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a,ContainerID:containerd://5cb70c8d0c53bcc41aff18814b5f64b1302aee8cc94529a7a3ae0783145be75d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.64.1.38,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:11:15.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1173" for this suite.
•{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":306,"completed":48,"skipped":644,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 24 10:11:15.361: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Oct 24 10:11:18.249: INFO: Deleting pod "var-expansion-3df38eda-a014-478a-bb37-b93bea4a0073" in namespace "var-expansion-1090"
Oct 24 10:11:18.292: INFO: Wait up to 5m0s for pod "var-expansion-3df38eda-a014-478a-bb37-b93bea4a0073" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:12:28.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1090" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":306,"completed":49,"skipped":653,"failed":0}
SSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Oct 24 10:12:28.455: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward api env vars
Oct 24 10:12:28.696: INFO: Waiting up to 5m0s for pod "downward-api-652fce90-0ff9-4815-b750-2a81298c58db" in namespace "downward-api-362" to be "Succeeded or Failed"
Oct 24 10:12:28.736: INFO: Pod "downward-api-652fce90-0ff9-4815-b750-2a81298c58db": Phase="Pending", Reason="", readiness=false. Elapsed: 40.714904ms
Oct 24 10:12:30.788: INFO: Pod "downward-api-652fce90-0ff9-4815-b750-2a81298c58db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.092117419s
STEP: Saw pod success
Oct 24 10:12:30.788: INFO: Pod "downward-api-652fce90-0ff9-4815-b750-2a81298c58db" satisfied condition "Succeeded or Failed"
Oct 24 10:12:30.834: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod downward-api-652fce90-0ff9-4815-b750-2a81298c58db container dapi-container: <nil>
STEP: delete the pod
Oct 24 10:12:31.052: INFO: Waiting for pod downward-api-652fce90-0ff9-4815-b750-2a81298c58db to disappear
Oct 24 10:12:31.120: INFO: Pod downward-api-652fce90-0ff9-4815-b750-2a81298c58db no longer exists
[AfterEach] [sig-node] Downward API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:12:31.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-362" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":306,"completed":50,"skipped":657,"failed":0}
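The test above exposes the pod's UID to the container through the downward API's `fieldRef` env-var source. A minimal manifest sketch of that pattern (name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.28
    command: ["sh", "-c", "echo \"POD_UID=$POD_UID\""]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid
```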
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating secret with name secret-test-map-ac7d5ee8-7e14-4181-92c1-1e269d06bdf5
STEP: Creating a pod to test consume secrets
Oct 24 10:12:31.744: INFO: Waiting up to 5m0s for pod "pod-secrets-bb82586b-9057-4f3b-a696-3b3b9aa9a1c1" in namespace "secrets-4240" to be "Succeeded or Failed"
Oct 24 10:12:31.783: INFO: Pod "pod-secrets-bb82586b-9057-4f3b-a696-3b3b9aa9a1c1": Phase="Pending", Reason="", readiness=false. Elapsed: 39.558966ms
Oct 24 10:12:33.824: INFO: Pod "pod-secrets-bb82586b-9057-4f3b-a696-3b3b9aa9a1c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.080369397s
STEP: Saw pod success
Oct 24 10:12:33.824: INFO: Pod "pod-secrets-bb82586b-9057-4f3b-a696-3b3b9aa9a1c1" satisfied condition "Succeeded or Failed"
Oct 24 10:12:33.864: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod pod-secrets-bb82586b-9057-4f3b-a696-3b3b9aa9a1c1 container secret-volume-test: <nil>
STEP: delete the pod
Oct 24 10:12:33.966: INFO: Waiting for pod pod-secrets-bb82586b-9057-4f3b-a696-3b3b9aa9a1c1 to disappear
Oct 24 10:12:34.006: INFO: Pod pod-secrets-bb82586b-9057-4f3b-a696-3b3b9aa9a1c1 no longer exists
[AfterEach] [sig-storage] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:12:34.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4240" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":51,"skipped":691,"failed":0}
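The test above mounts a Secret volume with key-to-path mappings and a per-item file mode. A sketch of that volume shape (names, keys, and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.28
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map-example   # illustrative
      items:
      - key: data-1               # map this Secret key...
        path: new-path-data-1     # ...to this file name in the volume
        mode: 0400                # per-item file mode (octal)
```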
SSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Oct 24 10:12:34.356: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-4c0bfb6b-cfb0-40b8-9e84-190dc9b63d48" in namespace "security-context-test-1902" to be "Succeeded or Failed"
Oct 24 10:12:34.588: INFO: Pod "busybox-readonly-false-4c0bfb6b-cfb0-40b8-9e84-190dc9b63d48": Phase="Pending", Reason="", readiness=false. Elapsed: 232.562892ms
Oct 24 10:12:36.639: INFO: Pod "busybox-readonly-false-4c0bfb6b-cfb0-40b8-9e84-190dc9b63d48": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.283593968s
Oct 24 10:12:36.639: INFO: Pod "busybox-readonly-false-4c0bfb6b-cfb0-40b8-9e84-190dc9b63d48" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:12:36.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1902" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":306,"completed":52,"skipped":703,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Oct 24 10:12:37.290: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f420b646-0778-4a60-9f8a-84c1c516e509" in namespace "projected-2031" to be "Succeeded or Failed"
Oct 24 10:12:37.374: INFO: Pod "downwardapi-volume-f420b646-0778-4a60-9f8a-84c1c516e509": Phase="Pending", Reason="", readiness=false. Elapsed: 84.263802ms
Oct 24 10:12:39.414: INFO: Pod "downwardapi-volume-f420b646-0778-4a60-9f8a-84c1c516e509": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.124114572s
STEP: Saw pod success
Oct 24 10:12:39.414: INFO: Pod "downwardapi-volume-f420b646-0778-4a60-9f8a-84c1c516e509" satisfied condition "Succeeded or Failed"
Oct 24 10:12:39.453: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod downwardapi-volume-f420b646-0778-4a60-9f8a-84c1c516e509 container client-container: <nil>
STEP: delete the pod
Oct 24 10:12:39.551: INFO: Waiting for pod downwardapi-volume-f420b646-0778-4a60-9f8a-84c1c516e509 to disappear
Oct 24 10:12:39.591: INFO: Pod downwardapi-volume-f420b646-0778-4a60-9f8a-84c1c516e509 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:12:39.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2031" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":53,"skipped":744,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Discovery 
  should validate PreferredVersion for each APIGroup [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Discovery
... skipping 96 lines ...
Oct 24 10:12:41.735: INFO: Versions found [{metrics.k8s.io/v1beta1 v1beta1}]
Oct 24 10:12:41.735: INFO: metrics.k8s.io/v1beta1 matches metrics.k8s.io/v1beta1
[AfterEach] [sig-api-machinery] Discovery
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:12:41.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "discovery-3368" for this suite.
•{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":306,"completed":54,"skipped":755,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name projected-configmap-test-volume-4ff86bc4-a693-43bf-a659-37ece5fa202b
STEP: Creating a pod to test consume configMaps
Oct 24 10:12:42.123: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-00908bd6-8f20-4dde-80ca-71f10ba354f6" in namespace "projected-5670" to be "Succeeded or Failed"
Oct 24 10:12:42.162: INFO: Pod "pod-projected-configmaps-00908bd6-8f20-4dde-80ca-71f10ba354f6": Phase="Pending", Reason="", readiness=false. Elapsed: 39.499808ms
Oct 24 10:12:44.202: INFO: Pod "pod-projected-configmaps-00908bd6-8f20-4dde-80ca-71f10ba354f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.07942515s
STEP: Saw pod success
Oct 24 10:12:44.202: INFO: Pod "pod-projected-configmaps-00908bd6-8f20-4dde-80ca-71f10ba354f6" satisfied condition "Succeeded or Failed"
Oct 24 10:12:44.248: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod pod-projected-configmaps-00908bd6-8f20-4dde-80ca-71f10ba354f6 container agnhost-container: <nil>
STEP: delete the pod
Oct 24 10:12:44.339: INFO: Waiting for pod pod-projected-configmaps-00908bd6-8f20-4dde-80ca-71f10ba354f6 to disappear
Oct 24 10:12:44.378: INFO: Pod pod-projected-configmaps-00908bd6-8f20-4dde-80ca-71f10ba354f6 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:12:44.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5670" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":306,"completed":55,"skipped":820,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 12 lines ...
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:13:01.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1597" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":306,"completed":56,"skipped":840,"failed":0}

------------------------------
[sig-api-machinery] Events 
  should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Events
... skipping 11 lines ...
STEP: deleting the test event
STEP: listing all events in all namespaces
[AfterEach] [sig-api-machinery] Events
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:13:02.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-1862" for this suite.
•{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":306,"completed":57,"skipped":840,"failed":0}
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] StatefulSet
... skipping 22 lines ...
Oct 24 10:13:43.683: INFO: Waiting for statefulset status.replicas updated to 0
Oct 24 10:13:43.722: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:13:43.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2785" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":306,"completed":58,"skipped":842,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Events 
  should delete a collection of events [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Events
... skipping 14 lines ...
STEP: check that the list of events matches the requested quantity
Oct 24 10:13:44.384: INFO: requesting list of events to confirm quantity
[AfterEach] [sig-api-machinery] Events
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:13:44.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-2559" for this suite.
•{"msg":"PASSED [sig-api-machinery] Events should delete a collection of events [Conformance]","total":306,"completed":59,"skipped":869,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Lease 
  lease API should be available [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Lease
... skipping 5 lines ...
[It] lease API should be available [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[AfterEach] [k8s.io] Lease
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:13:45.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-4408" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":306,"completed":60,"skipped":917,"failed":0}
SSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name configmap-test-volume-map-328f3797-c4f3-4141-9471-f3ebbb5a9a0b
STEP: Creating a pod to test consume configMaps
Oct 24 10:13:45.901: INFO: Waiting up to 5m0s for pod "pod-configmaps-ecc0b98e-9d91-4ce4-b95b-8dacb0542fd4" in namespace "configmap-1809" to be "Succeeded or Failed"
Oct 24 10:13:45.940: INFO: Pod "pod-configmaps-ecc0b98e-9d91-4ce4-b95b-8dacb0542fd4": Phase="Pending", Reason="", readiness=false. Elapsed: 38.995041ms
Oct 24 10:13:47.980: INFO: Pod "pod-configmaps-ecc0b98e-9d91-4ce4-b95b-8dacb0542fd4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.07866185s
STEP: Saw pod success
Oct 24 10:13:47.980: INFO: Pod "pod-configmaps-ecc0b98e-9d91-4ce4-b95b-8dacb0542fd4" satisfied condition "Succeeded or Failed"
Oct 24 10:13:48.022: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod pod-configmaps-ecc0b98e-9d91-4ce4-b95b-8dacb0542fd4 container configmap-volume-test: <nil>
STEP: delete the pod
Oct 24 10:13:48.132: INFO: Waiting for pod pod-configmaps-ecc0b98e-9d91-4ce4-b95b-8dacb0542fd4 to disappear
Oct 24 10:13:48.187: INFO: Pod pod-configmaps-ecc0b98e-9d91-4ce4-b95b-8dacb0542fd4 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:13:48.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1809" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":61,"skipped":921,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 127 lines ...
Oct 24 10:15:48.086: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"8397"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:15:48.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9551" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":306,"completed":62,"skipped":932,"failed":0}
SSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 12 lines ...
STEP: reading a file in the container
Oct 24 10:15:52.328: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl exec --namespace=svcaccounts-9234 pod-service-account-ad9fe108-0988-42c1-928e-36580ca6fb7c -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:15:52.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-9234" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":306,"completed":63,"skipped":941,"failed":0}
SSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Oct 24 10:15:52.942: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward api env vars
Oct 24 10:15:53.201: INFO: Waiting up to 5m0s for pod "downward-api-2080cab1-ba24-4379-b48d-05ef89030386" in namespace "downward-api-2972" to be "Succeeded or Failed"
Oct 24 10:15:53.255: INFO: Pod "downward-api-2080cab1-ba24-4379-b48d-05ef89030386": Phase="Pending", Reason="", readiness=false. Elapsed: 53.206213ms
Oct 24 10:15:55.295: INFO: Pod "downward-api-2080cab1-ba24-4379-b48d-05ef89030386": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.093139153s
STEP: Saw pod success
Oct 24 10:15:55.295: INFO: Pod "downward-api-2080cab1-ba24-4379-b48d-05ef89030386" satisfied condition "Succeeded or Failed"
Oct 24 10:15:55.334: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod downward-api-2080cab1-ba24-4379-b48d-05ef89030386 container dapi-container: <nil>
STEP: delete the pod
Oct 24 10:15:55.437: INFO: Waiting for pod downward-api-2080cab1-ba24-4379-b48d-05ef89030386 to disappear
Oct 24 10:15:55.476: INFO: Pod downward-api-2080cab1-ba24-4379-b48d-05ef89030386 no longer exists
[AfterEach] [sig-node] Downward API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:15:55.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2972" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":306,"completed":64,"skipped":951,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] ConfigMap
... skipping 9 lines ...
STEP: Updating configmap configmap-test-upd-51af263b-3eab-4d64-ba3e-88a206a2214f
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:16:00.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3634" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":306,"completed":65,"skipped":967,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Oct 24 10:16:00.454: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward api env vars
Oct 24 10:16:00.705: INFO: Waiting up to 5m0s for pod "downward-api-26c6ffdf-ab54-431d-bf6a-d99ed0deec1e" in namespace "downward-api-7125" to be "Succeeded or Failed"
Oct 24 10:16:00.745: INFO: Pod "downward-api-26c6ffdf-ab54-431d-bf6a-d99ed0deec1e": Phase="Pending", Reason="", readiness=false. Elapsed: 39.454153ms
Oct 24 10:16:02.800: INFO: Pod "downward-api-26c6ffdf-ab54-431d-bf6a-d99ed0deec1e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.094827846s
STEP: Saw pod success
Oct 24 10:16:02.800: INFO: Pod "downward-api-26c6ffdf-ab54-431d-bf6a-d99ed0deec1e" satisfied condition "Succeeded or Failed"
Oct 24 10:16:02.879: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod downward-api-26c6ffdf-ab54-431d-bf6a-d99ed0deec1e container dapi-container: <nil>
STEP: delete the pod
Oct 24 10:16:03.089: INFO: Waiting for pod downward-api-26c6ffdf-ab54-431d-bf6a-d99ed0deec1e to disappear
Oct 24 10:16:03.146: INFO: Pod downward-api-26c6ffdf-ab54-431d-bf6a-d99ed0deec1e no longer exists
[AfterEach] [sig-node] Downward API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:16:03.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7125" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":306,"completed":66,"skipped":978,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] [sig-node] Pods Extended
... skipping 10 lines ...
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:16:03.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7462" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":306,"completed":67,"skipped":1006,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name configmap-test-volume-859df02a-d1de-49b7-a0cf-d68786aa6b6c
STEP: Creating a pod to test consume configMaps
Oct 24 10:16:04.561: INFO: Waiting up to 5m0s for pod "pod-configmaps-c1683080-5120-4bd8-98e0-e4371d6cacf5" in namespace "configmap-6325" to be "Succeeded or Failed"
Oct 24 10:16:04.654: INFO: Pod "pod-configmaps-c1683080-5120-4bd8-98e0-e4371d6cacf5": Phase="Pending", Reason="", readiness=false. Elapsed: 93.197019ms
Oct 24 10:16:06.694: INFO: Pod "pod-configmaps-c1683080-5120-4bd8-98e0-e4371d6cacf5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.13375272s
Oct 24 10:16:08.735: INFO: Pod "pod-configmaps-c1683080-5120-4bd8-98e0-e4371d6cacf5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.174699074s
STEP: Saw pod success
Oct 24 10:16:08.735: INFO: Pod "pod-configmaps-c1683080-5120-4bd8-98e0-e4371d6cacf5" satisfied condition "Succeeded or Failed"
Oct 24 10:16:08.775: INFO: Trying to get logs from node bootstrap-e2e-minion-group-g27b pod pod-configmaps-c1683080-5120-4bd8-98e0-e4371d6cacf5 container configmap-volume-test: <nil>
STEP: delete the pod
Oct 24 10:16:08.961: INFO: Waiting for pod pod-configmaps-c1683080-5120-4bd8-98e0-e4371d6cacf5 to disappear
Oct 24 10:16:09.006: INFO: Pod pod-configmaps-c1683080-5120-4bd8-98e0-e4371d6cacf5 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:16:09.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6325" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":306,"completed":68,"skipped":1054,"failed":0}
S
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 9 lines ...
STEP: Creating the pod
Oct 24 10:16:12.421: INFO: Successfully updated pod "annotationupdate3e6d7e87-df39-461d-86ad-af6a76adb3fa"
[AfterEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:16:14.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7438" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":306,"completed":69,"skipped":1055,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Variable Expansion
... skipping 2 lines ...
Oct 24 10:16:14.614: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test env composition
Oct 24 10:16:15.052: INFO: Waiting up to 5m0s for pod "var-expansion-a81c9935-ba04-43ed-9fde-b0eb959e6440" in namespace "var-expansion-1434" to be "Succeeded or Failed"
Oct 24 10:16:15.097: INFO: Pod "var-expansion-a81c9935-ba04-43ed-9fde-b0eb959e6440": Phase="Pending", Reason="", readiness=false. Elapsed: 45.122388ms
Oct 24 10:16:17.156: INFO: Pod "var-expansion-a81c9935-ba04-43ed-9fde-b0eb959e6440": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.103748213s
STEP: Saw pod success
Oct 24 10:16:17.156: INFO: Pod "var-expansion-a81c9935-ba04-43ed-9fde-b0eb959e6440" satisfied condition "Succeeded or Failed"
Oct 24 10:16:17.203: INFO: Trying to get logs from node bootstrap-e2e-minion-group-g27b pod var-expansion-a81c9935-ba04-43ed-9fde-b0eb959e6440 container dapi-container: <nil>
STEP: delete the pod
Oct 24 10:16:17.549: INFO: Waiting for pod var-expansion-a81c9935-ba04-43ed-9fde-b0eb959e6440 to disappear
Oct 24 10:16:17.589: INFO: Pod var-expansion-a81c9935-ba04-43ed-9fde-b0eb959e6440 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:16:17.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1434" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":306,"completed":70,"skipped":1069,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Variable Expansion
... skipping 2 lines ...
Oct 24 10:16:17.677: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test substitution in container's command
Oct 24 10:16:17.922: INFO: Waiting up to 5m0s for pod "var-expansion-661f1542-37ac-4421-bac8-7b4acf03b971" in namespace "var-expansion-3238" to be "Succeeded or Failed"
Oct 24 10:16:17.961: INFO: Pod "var-expansion-661f1542-37ac-4421-bac8-7b4acf03b971": Phase="Pending", Reason="", readiness=false. Elapsed: 38.862104ms
Oct 24 10:16:20.000: INFO: Pod "var-expansion-661f1542-37ac-4421-bac8-7b4acf03b971": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.078301398s
STEP: Saw pod success
Oct 24 10:16:20.000: INFO: Pod "var-expansion-661f1542-37ac-4421-bac8-7b4acf03b971" satisfied condition "Succeeded or Failed"
Oct 24 10:16:20.039: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod var-expansion-661f1542-37ac-4421-bac8-7b4acf03b971 container dapi-container: <nil>
STEP: delete the pod
Oct 24 10:16:20.149: INFO: Waiting for pod var-expansion-661f1542-37ac-4421-bac8-7b4acf03b971 to disappear
Oct 24 10:16:20.190: INFO: Pod var-expansion-661f1542-37ac-4421-bac8-7b4acf03b971 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:16:20.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3238" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":306,"completed":71,"skipped":1113,"failed":0}
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Oct 24 10:16:20.520: INFO: Waiting up to 5m0s for pod "downwardapi-volume-82889231-f8e6-427d-9214-b94891bb0360" in namespace "projected-4328" to be "Succeeded or Failed"
Oct 24 10:16:20.561: INFO: Pod "downwardapi-volume-82889231-f8e6-427d-9214-b94891bb0360": Phase="Pending", Reason="", readiness=false. Elapsed: 41.355226ms
Oct 24 10:16:22.636: INFO: Pod "downwardapi-volume-82889231-f8e6-427d-9214-b94891bb0360": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.116376346s
STEP: Saw pod success
Oct 24 10:16:22.636: INFO: Pod "downwardapi-volume-82889231-f8e6-427d-9214-b94891bb0360" satisfied condition "Succeeded or Failed"
Oct 24 10:16:22.683: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod downwardapi-volume-82889231-f8e6-427d-9214-b94891bb0360 container client-container: <nil>
STEP: delete the pod
Oct 24 10:16:22.886: INFO: Waiting for pod downwardapi-volume-82889231-f8e6-427d-9214-b94891bb0360 to disappear
Oct 24 10:16:22.949: INFO: Pod downwardapi-volume-82889231-f8e6-427d-9214-b94891bb0360 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:16:22.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4328" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":306,"completed":72,"skipped":1117,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should test the lifecycle of an Endpoint [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Services
... skipping 19 lines ...
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:16:24.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5408" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:781
•{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":306,"completed":73,"skipped":1131,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name configmap-test-volume-map-d267c18d-e8f2-44e5-834b-62ff73b98853
STEP: Creating a pod to test consume configMaps
Oct 24 10:16:24.604: INFO: Waiting up to 5m0s for pod "pod-configmaps-870b373f-2c8a-4db8-a66b-04b96bc0a44f" in namespace "configmap-8275" to be "Succeeded or Failed"
Oct 24 10:16:24.644: INFO: Pod "pod-configmaps-870b373f-2c8a-4db8-a66b-04b96bc0a44f": Phase="Pending", Reason="", readiness=false. Elapsed: 40.375697ms
Oct 24 10:16:26.685: INFO: Pod "pod-configmaps-870b373f-2c8a-4db8-a66b-04b96bc0a44f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.081253623s
STEP: Saw pod success
Oct 24 10:16:26.685: INFO: Pod "pod-configmaps-870b373f-2c8a-4db8-a66b-04b96bc0a44f" satisfied condition "Succeeded or Failed"
Oct 24 10:16:26.724: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod pod-configmaps-870b373f-2c8a-4db8-a66b-04b96bc0a44f container configmap-volume-test: <nil>
STEP: delete the pod
Oct 24 10:16:26.821: INFO: Waiting for pod pod-configmaps-870b373f-2c8a-4db8-a66b-04b96bc0a44f to disappear
Oct 24 10:16:26.861: INFO: Pod pod-configmaps-870b373f-2c8a-4db8-a66b-04b96bc0a44f no longer exists
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:16:26.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8275" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":306,"completed":74,"skipped":1142,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Oct 24 10:16:27.260: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-4a800cf9-3858-45ef-b8ff-182803956905" in namespace "security-context-test-2681" to be "Succeeded or Failed"
Oct 24 10:16:27.303: INFO: Pod "alpine-nnp-false-4a800cf9-3858-45ef-b8ff-182803956905": Phase="Pending", Reason="", readiness=false. Elapsed: 42.833693ms
Oct 24 10:16:29.410: INFO: Pod "alpine-nnp-false-4a800cf9-3858-45ef-b8ff-182803956905": Phase="Pending", Reason="", readiness=false. Elapsed: 2.149914655s
Oct 24 10:16:31.450: INFO: Pod "alpine-nnp-false-4a800cf9-3858-45ef-b8ff-182803956905": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.189835403s
Oct 24 10:16:31.450: INFO: Pod "alpine-nnp-false-4a800cf9-3858-45ef-b8ff-182803956905" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:16:31.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2681" for this suite.
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":75,"skipped":1206,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Oct 24 10:16:31.584: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0666 on tmpfs
Oct 24 10:16:31.899: INFO: Waiting up to 5m0s for pod "pod-68cf3f5d-4701-4b3d-9eb2-736f4d03eb69" in namespace "emptydir-861" to be "Succeeded or Failed"
Oct 24 10:16:32.057: INFO: Pod "pod-68cf3f5d-4701-4b3d-9eb2-736f4d03eb69": Phase="Pending", Reason="", readiness=false. Elapsed: 158.690014ms
Oct 24 10:16:34.097: INFO: Pod "pod-68cf3f5d-4701-4b3d-9eb2-736f4d03eb69": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.198142391s
STEP: Saw pod success
Oct 24 10:16:34.097: INFO: Pod "pod-68cf3f5d-4701-4b3d-9eb2-736f4d03eb69" satisfied condition "Succeeded or Failed"
Oct 24 10:16:34.136: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod pod-68cf3f5d-4701-4b3d-9eb2-736f4d03eb69 container test-container: <nil>
STEP: delete the pod
Oct 24 10:16:34.254: INFO: Waiting for pod pod-68cf3f5d-4701-4b3d-9eb2-736f4d03eb69 to disappear
Oct 24 10:16:34.301: INFO: Pod pod-68cf3f5d-4701-4b3d-9eb2-736f4d03eb69 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:16:34.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-861" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":76,"skipped":1216,"failed":0}
SS
------------------------------
[sig-auth] ServiceAccounts 
  should run through the lifecycle of a ServiceAccount [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 10 lines ...
STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector)
STEP: deleting the ServiceAccount
[AfterEach] [sig-auth] ServiceAccounts
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:16:35.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-8198" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":306,"completed":77,"skipped":1218,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 23 lines ...
Oct 24 10:16:43.010: INFO: stderr: ""
Oct 24 10:16:43.010: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-3963-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t<string>\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t<string>\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t<Object>\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t<map[string]>\n     Specification of Waldo\n\n   status\t<Object>\n     Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:16:47.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9316" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":306,"completed":78,"skipped":1227,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 23 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:16:57.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1430" for this suite.
STEP: Destroying namespace "webhook-1430-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":306,"completed":79,"skipped":1294,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Oct 24 10:16:57.786: INFO: >>> kubeConfig: /workspace/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:16:58.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-2763" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":306,"completed":80,"skipped":1303,"failed":0}
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Oct 24 10:16:59.077: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7e6d72f3-cd05-438d-b3ff-2fe7dc5eb7ae" in namespace "projected-8389" to be "Succeeded or Failed"
Oct 24 10:16:59.129: INFO: Pod "downwardapi-volume-7e6d72f3-cd05-438d-b3ff-2fe7dc5eb7ae": Phase="Pending", Reason="", readiness=false. Elapsed: 51.856344ms
Oct 24 10:17:01.217: INFO: Pod "downwardapi-volume-7e6d72f3-cd05-438d-b3ff-2fe7dc5eb7ae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.14057507s
STEP: Saw pod success
Oct 24 10:17:01.217: INFO: Pod "downwardapi-volume-7e6d72f3-cd05-438d-b3ff-2fe7dc5eb7ae" satisfied condition "Succeeded or Failed"
Oct 24 10:17:01.257: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod downwardapi-volume-7e6d72f3-cd05-438d-b3ff-2fe7dc5eb7ae container client-container: <nil>
STEP: delete the pod
Oct 24 10:17:01.347: INFO: Waiting for pod downwardapi-volume-7e6d72f3-cd05-438d-b3ff-2fe7dc5eb7ae to disappear
Oct 24 10:17:01.386: INFO: Pod downwardapi-volume-7e6d72f3-cd05-438d-b3ff-2fe7dc5eb7ae no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:17:01.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8389" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":81,"skipped":1307,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 24 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:17:08.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5723" for this suite.
STEP: Destroying namespace "webhook-5723-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":306,"completed":82,"skipped":1333,"failed":0}
SSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Pods
... skipping 10 lines ...
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:17:11.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4954" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":306,"completed":83,"skipped":1336,"failed":0}
SSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Kubelet
... skipping 9 lines ...
[It] should be possible to delete [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[AfterEach] [k8s.io] Kubelet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:17:12.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7161" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":306,"completed":84,"skipped":1339,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 20 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:17:17.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8646" for this suite.
STEP: Destroying namespace "webhook-8646-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":306,"completed":85,"skipped":1358,"failed":0}
SSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Oct 24 10:17:17.903: INFO: >>> kubeConfig: /workspace/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:17:19.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-4440" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":306,"completed":86,"skipped":1362,"failed":0}
S
------------------------------
[k8s.io] Variable Expansion 
  should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Variable Expansion
... skipping 19 lines ...
Oct 24 10:17:25.541: INFO: Deleting pod "var-expansion-4784c408-e871-491d-83fb-678b1642058b" in namespace "var-expansion-8697"
Oct 24 10:17:25.614: INFO: Wait up to 5m0s for pod "var-expansion-4784c408-e871-491d-83fb-678b1642058b" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:18:09.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-8697" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":306,"completed":87,"skipped":1363,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 24 10:18:09.782: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating projection with secret that has name secret-emptykey-test-c241af14-e31a-4f82-baad-20c384d0215e
[AfterEach] [sig-api-machinery] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:18:10.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8427" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":306,"completed":88,"skipped":1375,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating projection with secret that has name projected-secret-test-map-9ba8e606-a0c3-4b68-9a8a-686b690f9d69
STEP: Creating a pod to test consume secrets
Oct 24 10:18:10.400: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-04959075-d3e2-440f-b1e7-b45468858bc2" in namespace "projected-2971" to be "Succeeded or Failed"
Oct 24 10:18:10.439: INFO: Pod "pod-projected-secrets-04959075-d3e2-440f-b1e7-b45468858bc2": Phase="Pending", Reason="", readiness=false. Elapsed: 39.398853ms
Oct 24 10:18:12.513: INFO: Pod "pod-projected-secrets-04959075-d3e2-440f-b1e7-b45468858bc2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.11353706s
STEP: Saw pod success
Oct 24 10:18:12.513: INFO: Pod "pod-projected-secrets-04959075-d3e2-440f-b1e7-b45468858bc2" satisfied condition "Succeeded or Failed"
Oct 24 10:18:12.565: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod pod-projected-secrets-04959075-d3e2-440f-b1e7-b45468858bc2 container projected-secret-volume-test: <nil>
STEP: delete the pod
Oct 24 10:18:12.837: INFO: Waiting for pod pod-projected-secrets-04959075-d3e2-440f-b1e7-b45468858bc2 to disappear
Oct 24 10:18:12.878: INFO: Pod pod-projected-secrets-04959075-d3e2-440f-b1e7-b45468858bc2 no longer exists
[AfterEach] [sig-storage] Projected secret
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:18:12.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2971" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":306,"completed":89,"skipped":1388,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 8 lines ...
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:18:20.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2719" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":306,"completed":90,"skipped":1405,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Docker Containers
... skipping 2 lines ...
Oct 24 10:18:20.449: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test override command
Oct 24 10:18:20.738: INFO: Waiting up to 5m0s for pod "client-containers-6cd83f2e-5775-4b8a-8cff-690fed25698c" in namespace "containers-3117" to be "Succeeded or Failed"
Oct 24 10:18:20.806: INFO: Pod "client-containers-6cd83f2e-5775-4b8a-8cff-690fed25698c": Phase="Pending", Reason="", readiness=false. Elapsed: 68.4659ms
Oct 24 10:18:22.846: INFO: Pod "client-containers-6cd83f2e-5775-4b8a-8cff-690fed25698c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.108188014s
STEP: Saw pod success
Oct 24 10:18:22.846: INFO: Pod "client-containers-6cd83f2e-5775-4b8a-8cff-690fed25698c" satisfied condition "Succeeded or Failed"
Oct 24 10:18:22.885: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod client-containers-6cd83f2e-5775-4b8a-8cff-690fed25698c container agnhost-container: <nil>
STEP: delete the pod
Oct 24 10:18:22.976: INFO: Waiting for pod client-containers-6cd83f2e-5775-4b8a-8cff-690fed25698c to disappear
Oct 24 10:18:23.015: INFO: Pod client-containers-6cd83f2e-5775-4b8a-8cff-690fed25698c no longer exists
[AfterEach] [k8s.io] Docker Containers
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:18:23.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3117" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":306,"completed":91,"skipped":1441,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Pods
... skipping 3 lines ...
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Oct 24 10:18:25.576: INFO: Waiting up to 5m0s for pod "client-envvars-040dccd3-ea79-4e63-85c0-09450911e071" in namespace "pods-1526" to be "Succeeded or Failed"
Oct 24 10:18:25.616: INFO: Pod "client-envvars-040dccd3-ea79-4e63-85c0-09450911e071": Phase="Pending", Reason="", readiness=false. Elapsed: 39.863326ms
Oct 24 10:18:27.743: INFO: Pod "client-envvars-040dccd3-ea79-4e63-85c0-09450911e071": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.167198832s
STEP: Saw pod success
Oct 24 10:18:27.743: INFO: Pod "client-envvars-040dccd3-ea79-4e63-85c0-09450911e071" satisfied condition "Succeeded or Failed"
Oct 24 10:18:27.783: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod client-envvars-040dccd3-ea79-4e63-85c0-09450911e071 container env3cont: <nil>
STEP: delete the pod
Oct 24 10:18:27.883: INFO: Waiting for pod client-envvars-040dccd3-ea79-4e63-85c0-09450911e071 to disappear
Oct 24 10:18:27.924: INFO: Pod client-envvars-040dccd3-ea79-4e63-85c0-09450911e071 no longer exists
[AfterEach] [k8s.io] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:18:27.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1526" for this suite.
•{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":306,"completed":92,"skipped":1448,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 22 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:18:34.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4296" for this suite.
STEP: Destroying namespace "webhook-4296-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":306,"completed":93,"skipped":1457,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 190 lines ...
Oct 24 10:19:39.471: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"9628"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:19:39.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3289" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":306,"completed":94,"skipped":1483,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name projected-configmap-test-volume-4957266d-b892-4fba-94ca-a302414a6c11
STEP: Creating a pod to test consume configMaps
Oct 24 10:19:40.069: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-10417ad3-5f65-465e-9ae4-1d34d239da81" in namespace "projected-8018" to be "Succeeded or Failed"
Oct 24 10:19:40.144: INFO: Pod "pod-projected-configmaps-10417ad3-5f65-465e-9ae4-1d34d239da81": Phase="Pending", Reason="", readiness=false. Elapsed: 74.694793ms
Oct 24 10:19:42.184: INFO: Pod "pod-projected-configmaps-10417ad3-5f65-465e-9ae4-1d34d239da81": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.11504488s
STEP: Saw pod success
Oct 24 10:19:42.184: INFO: Pod "pod-projected-configmaps-10417ad3-5f65-465e-9ae4-1d34d239da81" satisfied condition "Succeeded or Failed"
Oct 24 10:19:42.224: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod pod-projected-configmaps-10417ad3-5f65-465e-9ae4-1d34d239da81 container agnhost-container: <nil>
STEP: delete the pod
Oct 24 10:19:42.314: INFO: Waiting for pod pod-projected-configmaps-10417ad3-5f65-465e-9ae4-1d34d239da81 to disappear
Oct 24 10:19:42.353: INFO: Pod pod-projected-configmaps-10417ad3-5f65-465e-9ae4-1d34d239da81 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:19:42.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8018" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":306,"completed":95,"skipped":1489,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Oct 24 10:19:42.439: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir volume type on node default medium
Oct 24 10:19:42.680: INFO: Waiting up to 5m0s for pod "pod-5a18f4dd-7d2c-45f0-8040-c107fb6f4e5e" in namespace "emptydir-1185" to be "Succeeded or Failed"
Oct 24 10:19:42.728: INFO: Pod "pod-5a18f4dd-7d2c-45f0-8040-c107fb6f4e5e": Phase="Pending", Reason="", readiness=false. Elapsed: 48.167063ms
Oct 24 10:19:44.771: INFO: Pod "pod-5a18f4dd-7d2c-45f0-8040-c107fb6f4e5e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.090507701s
STEP: Saw pod success
Oct 24 10:19:44.771: INFO: Pod "pod-5a18f4dd-7d2c-45f0-8040-c107fb6f4e5e" satisfied condition "Succeeded or Failed"
Oct 24 10:19:44.811: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod pod-5a18f4dd-7d2c-45f0-8040-c107fb6f4e5e container test-container: <nil>
STEP: delete the pod
Oct 24 10:19:44.951: INFO: Waiting for pod pod-5a18f4dd-7d2c-45f0-8040-c107fb6f4e5e to disappear
Oct 24 10:19:44.995: INFO: Pod pod-5a18f4dd-7d2c-45f0-8040-c107fb6f4e5e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:19:44.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1185" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":96,"skipped":1558,"failed":0}

------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 8 lines ...
Oct 24 10:19:45.519: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"c06765e8-4254-4e19-98ef-8692ce3810f1", Controller:(*bool)(0xc003d6f8ae), BlockOwnerDeletion:(*bool)(0xc003d6f8af)}}
Oct 24 10:19:45.572: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"1bd85874-a51c-4252-ab03-252c46f6af9a", Controller:(*bool)(0xc003d6fac6), BlockOwnerDeletion:(*bool)(0xc003d6fac7)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:19:50.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5587" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":306,"completed":97,"skipped":1558,"failed":0}
SSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating secret secrets-5048/secret-test-eef34c1d-22d5-45e2-a3e5-03dddb2b102b
STEP: Creating a pod to test consume secrets
Oct 24 10:19:51.087: INFO: Waiting up to 5m0s for pod "pod-configmaps-bb2f1aaf-b5dc-45ce-91a3-2687e4ce2e3d" in namespace "secrets-5048" to be "Succeeded or Failed"
Oct 24 10:19:51.152: INFO: Pod "pod-configmaps-bb2f1aaf-b5dc-45ce-91a3-2687e4ce2e3d": Phase="Pending", Reason="", readiness=false. Elapsed: 64.266276ms
Oct 24 10:19:53.191: INFO: Pod "pod-configmaps-bb2f1aaf-b5dc-45ce-91a3-2687e4ce2e3d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10425643s
Oct 24 10:19:55.256: INFO: Pod "pod-configmaps-bb2f1aaf-b5dc-45ce-91a3-2687e4ce2e3d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.168526082s
STEP: Saw pod success
Oct 24 10:19:55.256: INFO: Pod "pod-configmaps-bb2f1aaf-b5dc-45ce-91a3-2687e4ce2e3d" satisfied condition "Succeeded or Failed"
Oct 24 10:19:55.383: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod pod-configmaps-bb2f1aaf-b5dc-45ce-91a3-2687e4ce2e3d container env-test: <nil>
STEP: delete the pod
Oct 24 10:19:55.724: INFO: Waiting for pod pod-configmaps-bb2f1aaf-b5dc-45ce-91a3-2687e4ce2e3d to disappear
Oct 24 10:19:55.778: INFO: Pod pod-configmaps-bb2f1aaf-b5dc-45ce-91a3-2687e4ce2e3d no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:19:55.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5048" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":306,"completed":98,"skipped":1561,"failed":0}
S
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Aggregator
... skipping 18 lines ...
[AfterEach] [sig-api-machinery] Aggregator
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:20:17.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-5410" for this suite.
•{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":306,"completed":99,"skipped":1562,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Container Runtime
... skipping 12 lines ...
Oct 24 10:20:20.109: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:20:20.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9586" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":306,"completed":100,"skipped":1617,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Container Runtime
... skipping 2 lines ...
Oct 24 10:20:20.286: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Oct 24 10:20:23.047: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:20:23.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-678" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":306,"completed":101,"skipped":1631,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 9 lines ...
STEP: Creating the pod
Oct 24 10:20:26.362: INFO: Successfully updated pod "labelsupdate1fa4aa2b-0360-4328-a3cb-a1a2f4fae6d2"
[AfterEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:20:30.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2999" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":306,"completed":102,"skipped":1699,"failed":0}
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Oct 24 10:20:30.840: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8d2c6c7c-504e-4a2f-8b6e-c625aa9f7980" in namespace "downward-api-9296" to be "Succeeded or Failed"
Oct 24 10:20:30.883: INFO: Pod "downwardapi-volume-8d2c6c7c-504e-4a2f-8b6e-c625aa9f7980": Phase="Pending", Reason="", readiness=false. Elapsed: 42.324169ms
Oct 24 10:20:32.923: INFO: Pod "downwardapi-volume-8d2c6c7c-504e-4a2f-8b6e-c625aa9f7980": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.082310718s
STEP: Saw pod success
Oct 24 10:20:32.923: INFO: Pod "downwardapi-volume-8d2c6c7c-504e-4a2f-8b6e-c625aa9f7980" satisfied condition "Succeeded or Failed"
Oct 24 10:20:32.963: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod downwardapi-volume-8d2c6c7c-504e-4a2f-8b6e-c625aa9f7980 container client-container: <nil>
STEP: delete the pod
Oct 24 10:20:33.141: INFO: Waiting for pod downwardapi-volume-8d2c6c7c-504e-4a2f-8b6e-c625aa9f7980 to disappear
Oct 24 10:20:33.180: INFO: Pod downwardapi-volume-8d2c6c7c-504e-4a2f-8b6e-c625aa9f7980 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:20:33.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9296" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":306,"completed":103,"skipped":1703,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] Certificates API [Privileged:ClusterAdmin] 
  should support CSR API operations [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin]
... skipping 25 lines ...
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:20:35.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "certificates-8424" for this suite.
•{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":306,"completed":104,"skipped":1731,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should run through a ConfigMap lifecycle [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-node] ConfigMap
... skipping 11 lines ...
STEP: deleting the ConfigMap by collection with a label selector
STEP: listing all ConfigMaps in test namespace
[AfterEach] [sig-node] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:20:36.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5451" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":306,"completed":105,"skipped":1751,"failed":0}
S
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Docker Containers
... skipping 2 lines ...
Oct 24 10:20:36.390: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test override arguments
Oct 24 10:20:36.633: INFO: Waiting up to 5m0s for pod "client-containers-764157ea-5d8f-4d3e-8f17-795c8e6ffa05" in namespace "containers-302" to be "Succeeded or Failed"
Oct 24 10:20:36.672: INFO: Pod "client-containers-764157ea-5d8f-4d3e-8f17-795c8e6ffa05": Phase="Pending", Reason="", readiness=false. Elapsed: 39.093581ms
Oct 24 10:20:38.712: INFO: Pod "client-containers-764157ea-5d8f-4d3e-8f17-795c8e6ffa05": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.079313872s
STEP: Saw pod success
Oct 24 10:20:38.712: INFO: Pod "client-containers-764157ea-5d8f-4d3e-8f17-795c8e6ffa05" satisfied condition "Succeeded or Failed"
Oct 24 10:20:38.751: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod client-containers-764157ea-5d8f-4d3e-8f17-795c8e6ffa05 container agnhost-container: <nil>
STEP: delete the pod
Oct 24 10:20:38.849: INFO: Waiting for pod client-containers-764157ea-5d8f-4d3e-8f17-795c8e6ffa05 to disappear
Oct 24 10:20:38.890: INFO: Pod client-containers-764157ea-5d8f-4d3e-8f17-795c8e6ffa05 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:20:38.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-302" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":306,"completed":106,"skipped":1752,"failed":0}
SS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] ReplicationController
... skipping 10 lines ...
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:20:41.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-70" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":306,"completed":107,"skipped":1754,"failed":0}

------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 23 lines ...
Oct 24 10:20:49.503: INFO: stderr: ""
Oct 24 10:20:49.503: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-9126-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     <empty>\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:20:53.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9412" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":306,"completed":108,"skipped":1754,"failed":0}
SSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-cli] Kubectl client
... skipping 26 lines ...
Oct 24 10:20:56.747: INFO: Selector matched 1 pods for map[app:agnhost]
Oct 24 10:20:56.747: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:20:56.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-830" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":306,"completed":109,"skipped":1760,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Docker Containers
... skipping 2 lines ...
Oct 24 10:20:56.831: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test override all
Oct 24 10:20:57.074: INFO: Waiting up to 5m0s for pod "client-containers-85c58a17-8af1-4ebd-8019-7e5ac954ff94" in namespace "containers-4064" to be "Succeeded or Failed"
Oct 24 10:20:57.113: INFO: Pod "client-containers-85c58a17-8af1-4ebd-8019-7e5ac954ff94": Phase="Pending", Reason="", readiness=false. Elapsed: 39.090618ms
Oct 24 10:20:59.158: INFO: Pod "client-containers-85c58a17-8af1-4ebd-8019-7e5ac954ff94": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.084186968s
STEP: Saw pod success
Oct 24 10:20:59.158: INFO: Pod "client-containers-85c58a17-8af1-4ebd-8019-7e5ac954ff94" satisfied condition "Succeeded or Failed"
Oct 24 10:20:59.222: INFO: Trying to get logs from node bootstrap-e2e-minion-group-g27b pod client-containers-85c58a17-8af1-4ebd-8019-7e5ac954ff94 container agnhost-container: <nil>
STEP: delete the pod
Oct 24 10:20:59.433: INFO: Waiting for pod client-containers-85c58a17-8af1-4ebd-8019-7e5ac954ff94 to disappear
Oct 24 10:20:59.477: INFO: Pod client-containers-85c58a17-8af1-4ebd-8019-7e5ac954ff94 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:20:59.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4064" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":306,"completed":110,"skipped":1819,"failed":0}
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl diff 
  should check if kubectl diff finds a difference for Deployments [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-cli] Kubectl client
... skipping 17 lines ...
Oct 24 10:21:01.889: INFO: stderr: ""
Oct 24 10:21:01.889: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:21:01.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1655" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":306,"completed":111,"skipped":1829,"failed":0}
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Oct 24 10:21:01.976: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0666 on node default medium
Oct 24 10:21:02.242: INFO: Waiting up to 5m0s for pod "pod-ce3ab6b4-1367-405c-ac08-5f76999f2448" in namespace "emptydir-110" to be "Succeeded or Failed"
Oct 24 10:21:02.284: INFO: Pod "pod-ce3ab6b4-1367-405c-ac08-5f76999f2448": Phase="Pending", Reason="", readiness=false. Elapsed: 42.651436ms
Oct 24 10:21:04.363: INFO: Pod "pod-ce3ab6b4-1367-405c-ac08-5f76999f2448": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.121108671s
STEP: Saw pod success
Oct 24 10:21:04.363: INFO: Pod "pod-ce3ab6b4-1367-405c-ac08-5f76999f2448" satisfied condition "Succeeded or Failed"
Oct 24 10:21:04.462: INFO: Trying to get logs from node bootstrap-e2e-minion-group-g27b pod pod-ce3ab6b4-1367-405c-ac08-5f76999f2448 container test-container: <nil>
STEP: delete the pod
Oct 24 10:21:04.572: INFO: Waiting for pod pod-ce3ab6b4-1367-405c-ac08-5f76999f2448 to disappear
Oct 24 10:21:04.618: INFO: Pod pod-ce3ab6b4-1367-405c-ac08-5f76999f2448 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:21:04.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-110" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":112,"skipped":1832,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] ReplicationController
... skipping 13 lines ...
Oct 24 10:21:07.183: INFO: Trying to dial the pod
Oct 24 10:21:12.303: INFO: Controller my-hostname-basic-94cb5bcf-3a74-4715-9846-b0345ddd00af: Got expected result from replica 1 [my-hostname-basic-94cb5bcf-3a74-4715-9846-b0345ddd00af-ggsjx]: "my-hostname-basic-94cb5bcf-3a74-4715-9846-b0345ddd00af-ggsjx", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:21:12.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-734" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":306,"completed":113,"skipped":1844,"failed":0}
SSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] ConfigMap
... skipping 9 lines ...
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:21:15.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7476" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":306,"completed":114,"skipped":1851,"failed":0}
SSSSSSSSSSSSSSS
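The ConfigMap binary-data test mounts a ConfigMap carrying both a UTF-8 `data` entry and a base64-encoded `binaryData` entry, then verifies both appear in the volume. A sketch with illustrative key names:

```python
import base64

# `data` holds UTF-8 strings; `binaryData` holds base64-encoded bytes,
# which is what lets a ConfigMap carry non-text payloads.
binary_payload = bytes([0xFF, 0xFE, 0x00, 0x01])
configmap = {
    "apiVersion": "v1",
    "kind": "ConfigMap",
    "metadata": {"name": "configmap-binary-demo"},
    "data": {"text-data": "hello world"},
    "binaryData": {
        "binary-data": base64.b64encode(binary_payload).decode("ascii"),
    },
}

# A consumer (e.g. the kubelet projecting the volume) decodes binaryData
# back to the raw bytes before writing the file.
decoded = base64.b64decode(configmap["binaryData"]["binary-data"])
```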
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Kubelet
... skipping 9 lines ...
[It] should have an terminated reason [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[AfterEach] [k8s.io] Kubelet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:21:20.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9535" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":306,"completed":115,"skipped":1866,"failed":0}
SSSSSSS
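The Kubelet test above runs a command that always fails and then asserts the container's status carries a non-empty terminated reason. A sketch of the relevant slice of `PodStatus` and the check; the status values shown are illustrative:

```python
# Modeled fragment of a container status after a failing command exits.
container_status = {
    "name": "bin-false",
    "state": {
        "terminated": {
            "exitCode": 1,
            "reason": "Error",  # kubelet records a non-empty reason
            "startedAt": "2020-10-24T10:21:16Z",
            "finishedAt": "2020-10-24T10:21:16Z",
        }
    },
}

def terminated_reason(status):
    """Return the terminated reason, or None if the container is not in
    a terminated state."""
    terminated = status.get("state", {}).get("terminated")
    return terminated.get("reason") if terminated else None

reason = terminated_reason(container_status)
```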
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating projection with secret that has name projected-secret-test-map-74614055-0c1c-4320-8281-9f42ee426155
STEP: Creating a pod to test consume secrets
Oct 24 10:21:20.787: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5d53f588-2716-42e9-b448-4dfe13af877b" in namespace "projected-5874" to be "Succeeded or Failed"
Oct 24 10:21:20.837: INFO: Pod "pod-projected-secrets-5d53f588-2716-42e9-b448-4dfe13af877b": Phase="Pending", Reason="", readiness=false. Elapsed: 49.526408ms
Oct 24 10:21:22.884: INFO: Pod "pod-projected-secrets-5d53f588-2716-42e9-b448-4dfe13af877b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.096861636s
STEP: Saw pod success
Oct 24 10:21:22.884: INFO: Pod "pod-projected-secrets-5d53f588-2716-42e9-b448-4dfe13af877b" satisfied condition "Succeeded or Failed"
Oct 24 10:21:22.939: INFO: Trying to get logs from node bootstrap-e2e-minion-group-g27b pod pod-projected-secrets-5d53f588-2716-42e9-b448-4dfe13af877b container projected-secret-volume-test: <nil>
STEP: delete the pod
Oct 24 10:21:23.176: INFO: Waiting for pod pod-projected-secrets-5d53f588-2716-42e9-b448-4dfe13af877b to disappear
Oct 24 10:21:23.227: INFO: Pod pod-projected-secrets-5d53f588-2716-42e9-b448-4dfe13af877b no longer exists
[AfterEach] [sig-storage] Projected secret
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:21:23.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5874" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":116,"skipped":1873,"failed":0}
SSSS
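The projected-secret test just completed mounts a secret key under a remapped path with an explicit item mode. A sketch of such a volume source; note the API's `mode` field is the octal permission expressed as a decimal integer, and the names below are illustrative:

```python
volume = {
    "name": "projected-secret-volume",
    "projected": {
        "sources": [{
            "secret": {
                "name": "projected-secret-test-map",
                "items": [{
                    "key": "data-1",
                    "path": "new-path-data-1",  # mapping: key -> new path
                    "mode": 0o400,              # file becomes r--------
                }],
            }
        }]
    },
}

item = volume["projected"]["sources"][0]["secret"]["items"][0]
# 0o400 serializes as decimal 256 in the JSON manifest.
mode_decimal = item["mode"]
mode_octal = oct(item["mode"])
```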
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 19 lines ...
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:21:40.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2767" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":306,"completed":117,"skipped":1877,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
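The ResourceQuota scope test creates quotas scoped to `Terminating` and `NotTerminating` pods and checks that only a matching pod consumes quota. The scope distinction hinges on whether `activeDeadlineSeconds` is set; a sketch with illustrative names and limits:

```python
quota_terminating = {
    "apiVersion": "v1",
    "kind": "ResourceQuota",
    "metadata": {"name": "quota-terminating"},
    "spec": {
        "hard": {"pods": "1", "requests.cpu": "500m"},
        # Only pods in the Terminating scope count against this quota.
        "scopes": ["Terminating"],
    },
}

def matches_terminating_scope(pod_spec):
    """A pod is 'Terminating' for quota purposes iff it sets
    activeDeadlineSeconds; otherwise it is 'NotTerminating'."""
    return pod_spec.get("activeDeadlineSeconds") is not None

pod_with_deadline = {"activeDeadlineSeconds": 3600, "containers": []}
pod_without_deadline = {"containers": []}
```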
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] DNS
... skipping 26 lines ...
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Oct 24 10:21:46.083: INFO: File wheezy_udp@dns-test-service-3.dns-6627.svc.cluster.local from pod  dns-6627/dns-test-0aa15aa4-6404-4a4c-8936-50b1749a3a2c contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct 24 10:21:46.188: INFO: File jessie_udp@dns-test-service-3.dns-6627.svc.cluster.local from pod  dns-6627/dns-test-0aa15aa4-6404-4a4c-8936-50b1749a3a2c contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct 24 10:21:46.188: INFO: Lookups using dns-6627/dns-test-0aa15aa4-6404-4a4c-8936-50b1749a3a2c failed for: [wheezy_udp@dns-test-service-3.dns-6627.svc.cluster.local jessie_udp@dns-test-service-3.dns-6627.svc.cluster.local]

Oct 24 10:21:51.257: INFO: File wheezy_udp@dns-test-service-3.dns-6627.svc.cluster.local from pod  dns-6627/dns-test-0aa15aa4-6404-4a4c-8936-50b1749a3a2c contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct 24 10:21:51.332: INFO: File jessie_udp@dns-test-service-3.dns-6627.svc.cluster.local from pod  dns-6627/dns-test-0aa15aa4-6404-4a4c-8936-50b1749a3a2c contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct 24 10:21:51.332: INFO: Lookups using dns-6627/dns-test-0aa15aa4-6404-4a4c-8936-50b1749a3a2c failed for: [wheezy_udp@dns-test-service-3.dns-6627.svc.cluster.local jessie_udp@dns-test-service-3.dns-6627.svc.cluster.local]

Oct 24 10:21:56.229: INFO: File wheezy_udp@dns-test-service-3.dns-6627.svc.cluster.local from pod  dns-6627/dns-test-0aa15aa4-6404-4a4c-8936-50b1749a3a2c contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct 24 10:21:56.271: INFO: File jessie_udp@dns-test-service-3.dns-6627.svc.cluster.local from pod  dns-6627/dns-test-0aa15aa4-6404-4a4c-8936-50b1749a3a2c contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct 24 10:21:56.271: INFO: Lookups using dns-6627/dns-test-0aa15aa4-6404-4a4c-8936-50b1749a3a2c failed for: [wheezy_udp@dns-test-service-3.dns-6627.svc.cluster.local jessie_udp@dns-test-service-3.dns-6627.svc.cluster.local]

Oct 24 10:22:01.359: INFO: File wheezy_udp@dns-test-service-3.dns-6627.svc.cluster.local from pod  dns-6627/dns-test-0aa15aa4-6404-4a4c-8936-50b1749a3a2c contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct 24 10:22:01.401: INFO: File jessie_udp@dns-test-service-3.dns-6627.svc.cluster.local from pod  dns-6627/dns-test-0aa15aa4-6404-4a4c-8936-50b1749a3a2c contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct 24 10:22:01.401: INFO: Lookups using dns-6627/dns-test-0aa15aa4-6404-4a4c-8936-50b1749a3a2c failed for: [wheezy_udp@dns-test-service-3.dns-6627.svc.cluster.local jessie_udp@dns-test-service-3.dns-6627.svc.cluster.local]

Oct 24 10:22:06.264: INFO: File wheezy_udp@dns-test-service-3.dns-6627.svc.cluster.local from pod  dns-6627/dns-test-0aa15aa4-6404-4a4c-8936-50b1749a3a2c contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct 24 10:22:06.311: INFO: File jessie_udp@dns-test-service-3.dns-6627.svc.cluster.local from pod  dns-6627/dns-test-0aa15aa4-6404-4a4c-8936-50b1749a3a2c contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct 24 10:22:06.311: INFO: Lookups using dns-6627/dns-test-0aa15aa4-6404-4a4c-8936-50b1749a3a2c failed for: [wheezy_udp@dns-test-service-3.dns-6627.svc.cluster.local jessie_udp@dns-test-service-3.dns-6627.svc.cluster.local]

Oct 24 10:22:11.230: INFO: File wheezy_udp@dns-test-service-3.dns-6627.svc.cluster.local from pod  dns-6627/dns-test-0aa15aa4-6404-4a4c-8936-50b1749a3a2c contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct 24 10:22:11.272: INFO: File jessie_udp@dns-test-service-3.dns-6627.svc.cluster.local from pod  dns-6627/dns-test-0aa15aa4-6404-4a4c-8936-50b1749a3a2c contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct 24 10:22:11.272: INFO: Lookups using dns-6627/dns-test-0aa15aa4-6404-4a4c-8936-50b1749a3a2c failed for: [wheezy_udp@dns-test-service-3.dns-6627.svc.cluster.local jessie_udp@dns-test-service-3.dns-6627.svc.cluster.local]

Oct 24 10:22:16.279: INFO: DNS probes using dns-test-0aa15aa4-6404-4a4c-8936-50b1749a3a2c succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6627.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-6627.svc.cluster.local; sleep 1; done
... skipping 9 lines ...
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:22:20.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6627" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":306,"completed":118,"skipped":1921,"failed":0}
SSSSSSSSSSSSSSSSSS
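The DNS probes above repeatedly `dig` the name `dns-test-service-3.dns-6627.svc.cluster.local`, waiting for the CNAME target to flip from `foo.example.com.` to `bar.example.com.`. An ExternalName service resolves via such a CNAME rather than a ClusterIP; a sketch of the FQDN construction and the service shape, echoing the log's names for illustration:

```python
def service_fqdn(service, namespace, cluster_domain="cluster.local"):
    """Assemble the in-cluster DNS name for a service:
    <service>.<namespace>.svc.<cluster-domain>"""
    return f"{service}.{namespace}.svc.{cluster_domain}"

external_name_service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "dns-test-service-3", "namespace": "dns-6627"},
    "spec": {
        "type": "ExternalName",
        "externalName": "foo.example.com",  # target of the CNAME record
    },
}

fqdn = service_fqdn("dns-test-service-3", "dns-6627")
```

The retry loop in the log exists because DNS record updates propagate asynchronously after the service's `externalName` is changed, so lookups legitimately return the stale target for a while.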
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Oct 24 10:22:21.147: INFO: >>> kubeConfig: /workspace/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:22:34.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-126" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":306,"completed":119,"skipped":1939,"failed":0}
SSSSSSS
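The CRD listing test creates several simple CustomResourceDefinitions and lists them back. A CRD's `metadata.name` must equal `<plural>.<group>`, which is the invariant that makes addressing by group/version/plural work; a minimal illustrative definition:

```python
group = "mygroup.example.com"   # hypothetical API group
plural = "noxus"                # hypothetical resource plural

crd = {
    "apiVersion": "apiextensions.k8s.io/v1",
    "kind": "CustomResourceDefinition",
    # Naming rule: metadata.name == "<plural>.<group>"
    "metadata": {"name": f"{plural}.{group}"},
    "spec": {
        "group": group,
        "scope": "Namespaced",
        "names": {"plural": plural, "singular": "noxu", "kind": "Noxu"},
        "versions": [{
            "name": "v1", "served": True, "storage": True,
            "schema": {"openAPIV3Schema": {"type": "object"}},
        }],
    },
}

name_ok = (crd["metadata"]["name"]
           == crd["spec"]["names"]["plural"] + "." + crd["spec"]["group"])
```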
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating secret with name secret-test-682a222f-9171-43cc-b982-52a901c83cb8
STEP: Creating a pod to test consume secrets
Oct 24 10:22:35.878: INFO: Waiting up to 5m0s for pod "pod-secrets-c5925653-90cf-4f8e-9346-7eda06ed55be" in namespace "secrets-8401" to be "Succeeded or Failed"
Oct 24 10:22:36.033: INFO: Pod "pod-secrets-c5925653-90cf-4f8e-9346-7eda06ed55be": Phase="Pending", Reason="", readiness=false. Elapsed: 155.179531ms
Oct 24 10:22:38.072: INFO: Pod "pod-secrets-c5925653-90cf-4f8e-9346-7eda06ed55be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.19493796s
STEP: Saw pod success
Oct 24 10:22:38.073: INFO: Pod "pod-secrets-c5925653-90cf-4f8e-9346-7eda06ed55be" satisfied condition "Succeeded or Failed"
Oct 24 10:22:38.112: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod pod-secrets-c5925653-90cf-4f8e-9346-7eda06ed55be container secret-env-test: <nil>
STEP: delete the pod
Oct 24 10:22:38.248: INFO: Waiting for pod pod-secrets-c5925653-90cf-4f8e-9346-7eda06ed55be to disappear
Oct 24 10:22:38.287: INFO: Pod pod-secrets-c5925653-90cf-4f8e-9346-7eda06ed55be no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:22:38.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8401" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":306,"completed":120,"skipped":1946,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
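The Secrets env-var test consumes a secret key as a container environment variable via `secretKeyRef`. Secret values are stored base64-encoded in the manifest's `data` field; the kubelet injects the decoded value. A sketch with illustrative names:

```python
import base64

secret = {
    "apiVersion": "v1",
    "kind": "Secret",
    "metadata": {"name": "secret-env-demo"},
    # Values under `data` are base64-encoded bytes.
    "data": {"data-1": base64.b64encode(b"value-1").decode("ascii")},
}

# Container env entry referencing the secret key.
env_entry = {
    "name": "SECRET_DATA",
    "valueFrom": {"secretKeyRef": {"name": "secret-env-demo",
                                   "key": "data-1"}},
}

# What the container would observe in its environment:
key = env_entry["valueFrom"]["secretKeyRef"]["key"]
injected = base64.b64decode(secret["data"][key]).decode()
```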
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected secret
... skipping 12 lines ...
STEP: Creating secret with name s-test-opt-create-fbe2d947-f422-448e-a4cb-9ea30886052b
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:24:08.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2759" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":306,"completed":121,"skipped":1975,"failed":0}
SSSSSSSSSSSS
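The "optional updates" test above mounts projected secret sources marked `optional: true`: a missing secret yields an empty projection instead of a mount failure, and creating or updating the secret later is reflected in the volume (hence the "waiting to observe update in volume" step). A simplified model of that resolution, with illustrative names:

```python
def project_optional_secret(existing_secrets, source):
    """Return the files a projected-secret source contributes, or {} if
    the secret is absent and the source is marked optional."""
    secret_ref = source["secret"]
    secret_data = existing_secrets.get(secret_ref["name"])
    if secret_data is None:
        if secret_ref.get("optional"):
            return {}                        # tolerated: empty projection
        raise FileNotFoundError(secret_ref["name"])  # mandatory, missing
    return dict(secret_data)

source = {"secret": {"name": "s-test-opt-create", "optional": True}}

# Before the secret exists, the projection is empty but the pod starts;
# after creation, the volume picks up the key.
before = project_optional_secret({}, source)
after = project_optional_secret({"s-test-opt-create": {"data-1": "v"}}, source)
```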
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] Deployment
... skipping 56 lines ...
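The proportional-scaling test scales a Deployment while a rollout is in flight (hence the dump below of pods from both the old and new ReplicaSets). The controller distributes the replica delta across the ReplicaSets in proportion to their current sizes, giving the rounding leftover to the larger set. A simplified model of that arithmetic; the real controller additionally respects maxSurge/maxUnavailable limits, which this sketch omits:

```python
def proportional_scale(replicasets, new_total):
    """replicasets: {name: current_replicas}. Return the new replica
    allocation after proportionally distributing the scale delta."""
    current_total = sum(replicasets.values())
    delta = new_total - current_total
    allocation = {}
    allocated = 0
    for name, size in replicasets.items():
        share = delta * size // max(current_total, 1)  # floor of proportional share
        allocation[name] = size + share
        allocated += share
    # Hand the rounding leftover to the largest ReplicaSet, mirroring the
    # controller's preference for the biggest set.
    leftover = delta - allocated
    largest = max(replicasets, key=replicasets.get)
    allocation[largest] += leftover
    return allocation

# e.g. scaling a mid-rollout deployment of 10 replicas (8 old + 2 new)
# up to 30 adds 20 replicas split 16/4 between the two sets.
scaled = proportional_scale({"old-rs": 8, "new-rs": 2}, 30)
```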
&Pod{ObjectMeta:{webserver-deployment-795d758f88-rsq2b webserver-deployment-795d758f88- deployment-3002  6e015b9e-9c90-41d6-b666-b44f9af98d43 10876 0 2020-10-24 10:24:14 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 bc49141d-c7e7-4095-86c0-6be045729bac 0xc00063ab40 0xc00063ab41}] []  [{kube-controller-manager Update v1 2020-10-24 10:24:14 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc49141d-c7e7-4095-86c0-6be045729bac\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-24 10:24:15 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5cnb6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5cnb6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5cnb6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},Resta
rtPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-vkx8,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-24 10:24:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-24 10:24:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-24 10:24:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-10-24 10:24:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.3,PodIP:,StartTime:2020-10-24 10:24:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct 24 10:24:17.914: INFO: Pod "webserver-deployment-795d758f88-tn7p8" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-tn7p8 webserver-deployment-795d758f88- deployment-3002  6fb1e91e-3267-4b20-a267-7a97b876a449 10939 0 2020-10-24 10:24:17 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 bc49141d-c7e7-4095-86c0-6be045729bac 0xc00063acd0 0xc00063acd1}] []  [{kube-controller-manager Update v1 2020-10-24 10:24:17 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc49141d-c7e7-4095-86c0-6be045729bac\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5cnb6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5cnb6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5cnb6,ReadOnly:true,MountPath:/var/run/s
ecrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-bf58,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-10-24 10:24:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct 24 10:24:17.914: INFO: Pod "webserver-deployment-795d758f88-vbk5d" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-vbk5d webserver-deployment-795d758f88- deployment-3002  2f677754-6724-441a-9ad3-94465eeba356 10964 0 2020-10-24 10:24:17 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 bc49141d-c7e7-4095-86c0-6be045729bac 0xc00063ae10 0xc00063ae11}] []  [{kube-controller-manager Update v1 2020-10-24 10:24:17 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc49141d-c7e7-4095-86c0-6be045729bac\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-24 10:24:17 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5cnb6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5cnb6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5cnb6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},Resta
rtPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-vkx8,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-24 10:24:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-24 10:24:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-24 10:24:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-10-24 10:24:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.3,PodIP:,StartTime:2020-10-24 10:24:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct 24 10:24:17.914: INFO: Pod "webserver-deployment-795d758f88-w2dhs" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-w2dhs webserver-deployment-795d758f88- deployment-3002  fd2266fe-3e15-48ca-9f76-1b3228ffcea4 10899 0 2020-10-24 10:24:14 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 bc49141d-c7e7-4095-86c0-6be045729bac 0xc00063b730 0xc00063b731}] []  [{kube-controller-manager Update v1 2020-10-24 10:24:14 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc49141d-c7e7-4095-86c0-6be045729bac\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-24 10:24:17 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.64.3.20\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5cnb6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5cnb6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5cnb6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},Termina
tionMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-bf58,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-24 10:24:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-24 10:24:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-24 10:24:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-24 10:24:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.5,PodIP:10.64.3.20,StartTime:2020-10-24 10:24:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.64.3.20,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct 24 10:24:17.914: INFO: Pod "webserver-deployment-795d758f88-zvktq" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-zvktq webserver-deployment-795d758f88- deployment-3002  e2bf6786-51ba-4f75-976a-0d77862a01de 10951 0 2020-10-24 10:24:17 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 bc49141d-c7e7-4095-86c0-6be045729bac 0xc00063b8f0 0xc00063b8f1}] []  [{kube-controller-manager Update v1 2020-10-24 10:24:17 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc49141d-c7e7-4095-86c0-6be045729bac\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-24 10:24:17 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5cnb6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5cnb6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5cnb6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},Resta
rtPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-bf58,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-24 10:24:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-24 10:24:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-24 10:24:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-10-24 10:24:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.5,PodIP:,StartTime:2020-10-24 10:24:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct 24 10:24:17.915: INFO: Pod "webserver-deployment-dd94f59b7-2klrh" is not available:
&Pod{ObjectMeta:{webserver-deployment-dd94f59b7-2klrh webserver-deployment-dd94f59b7- deployment-3002  b6d4bf8a-8e4f-4dbf-a095-4cbf600f82d4 10949 0 2020-10-24 10:24:17 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 39fccdea-7af7-4c2d-b65a-4d8be49bc566 0xc00063ba90 0xc00063ba91}] []  [{kube-controller-manager Update v1 2020-10-24 10:24:17 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"39fccdea-7af7-4c2d-b65a-4d8be49bc566\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5cnb6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5cnb6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5cnb6,ReadOnly:true,
MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-vkx8,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastP
robeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-24 10:24:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct 24 10:24:17.915: INFO: Pod "webserver-deployment-dd94f59b7-2zxzq" is available:
&Pod{ObjectMeta:{webserver-deployment-dd94f59b7-2zxzq webserver-deployment-dd94f59b7- deployment-3002  135c1cab-1ea0-45a5-9b9b-03cadffd658c 10826 0 2020-10-24 10:24:08 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 39fccdea-7af7-4c2d-b65a-4d8be49bc566 0xc00063bbb0 0xc00063bbb1}] []  [{kube-controller-manager Update v1 2020-10-24 10:24:08 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"39fccdea-7af7-4c2d-b65a-4d8be49bc566\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-24 10:24:11 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.64.1.94\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5cnb6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5cnb6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5cnb6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fil
e,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-vkx8,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-24 10:24:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-24 10:24:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-24 10:24:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-24 10:24:08 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.3,PodIP:10.64.1.94,StartTime:2020-10-24 10:24:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-24 10:24:10 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://3371b0471827eec77ad0898f9af5b7204718f0da232c14f87c92cef828e33781,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.64.1.94,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
... skipping 34 lines ...
Oct 24 10:24:17.919: INFO: Pod "webserver-deployment-dd94f59b7-zx8d4" is not available:
&Pod{ObjectMeta:{webserver-deployment-dd94f59b7-zx8d4 webserver-deployment-dd94f59b7- deployment-3002  963e9642-89b8-47a9-a9a7-39860f4311f5 10955 0 2020-10-24 10:24:17 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 39fccdea-7af7-4c2d-b65a-4d8be49bc566 0xc003e0c3a0 0xc003e0c3a1}] []  [{kube-controller-manager Update v1 2020-10-24 10:24:17 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"39fccdea-7af7-4c2d-b65a-4d8be49bc566\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-24 10:24:17 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5cnb6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5cnb6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5cnb6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},St
artupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-vkx8,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-24 10:24:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-24 10:24:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-24 10:24:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-24 10:24:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.3,PodIP:,StartTime:2020-10-24 10:24:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:24:17.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3002" for this suite.
•{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":306,"completed":122,"skipped":1987,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Services
... skipping 45 lines ...
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:25:29.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8550" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:781
•{"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":306,"completed":123,"skipped":2040,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Oct 24 10:25:29.399: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7bef388e-6080-467c-b756-ef7d6a679c01" in namespace "projected-9464" to be "Succeeded or Failed"
Oct 24 10:25:29.438: INFO: Pod "downwardapi-volume-7bef388e-6080-467c-b756-ef7d6a679c01": Phase="Pending", Reason="", readiness=false. Elapsed: 39.107227ms
Oct 24 10:25:31.479: INFO: Pod "downwardapi-volume-7bef388e-6080-467c-b756-ef7d6a679c01": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.079641703s
STEP: Saw pod success
Oct 24 10:25:31.479: INFO: Pod "downwardapi-volume-7bef388e-6080-467c-b756-ef7d6a679c01" satisfied condition "Succeeded or Failed"
Oct 24 10:25:31.518: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod downwardapi-volume-7bef388e-6080-467c-b756-ef7d6a679c01 container client-container: <nil>
STEP: delete the pod
Oct 24 10:25:31.610: INFO: Waiting for pod downwardapi-volume-7bef388e-6080-467c-b756-ef7d6a679c01 to disappear
Oct 24 10:25:31.649: INFO: Pod downwardapi-volume-7bef388e-6080-467c-b756-ef7d6a679c01 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:25:31.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9464" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":306,"completed":124,"skipped":2086,"failed":0}
SSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 25 lines ...
Oct 24 10:25:48.532: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Oct 24 10:25:48.573: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:25:48.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6771" for this suite.
•{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":306,"completed":125,"skipped":2089,"failed":0}

------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] [sig-node] Events
... skipping 16 lines ...
Oct 24 10:25:57.210: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:25:57.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-2517" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":306,"completed":126,"skipped":2089,"failed":0}
SS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Probing container
... skipping 17 lines ...
Oct 24 10:28:25.303: INFO: Restart count of pod container-probe-4806/liveness-bcfa9b4c-b408-49ef-8fa9-ccb7cd2fd3f5 is now 5 (2m25.481761481s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:28:25.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4806" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":306,"completed":127,"skipped":2091,"failed":0}
SSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 156 lines ...
Oct 24 10:29:09.491: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"11958"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:29:09.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2364" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":306,"completed":128,"skipped":2096,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-cli] Kubectl client
... skipping 47 lines ...
Oct 24 10:29:14.617: INFO: stderr: ""
Oct 24 10:29:14.617: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:29:14.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7189" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":306,"completed":129,"skipped":2107,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Oct 24 10:29:14.699: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0644 on node default medium
Oct 24 10:29:14.932: INFO: Waiting up to 5m0s for pod "pod-a20bd4b1-7da4-404a-8f3d-33ca0bd16092" in namespace "emptydir-1101" to be "Succeeded or Failed"
Oct 24 10:29:14.971: INFO: Pod "pod-a20bd4b1-7da4-404a-8f3d-33ca0bd16092": Phase="Pending", Reason="", readiness=false. Elapsed: 39.011218ms
Oct 24 10:29:17.010: INFO: Pod "pod-a20bd4b1-7da4-404a-8f3d-33ca0bd16092": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.077871156s
STEP: Saw pod success
Oct 24 10:29:17.010: INFO: Pod "pod-a20bd4b1-7da4-404a-8f3d-33ca0bd16092" satisfied condition "Succeeded or Failed"
Oct 24 10:29:17.047: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod pod-a20bd4b1-7da4-404a-8f3d-33ca0bd16092 container test-container: <nil>
STEP: delete the pod
Oct 24 10:29:17.163: INFO: Waiting for pod pod-a20bd4b1-7da4-404a-8f3d-33ca0bd16092 to disappear
Oct 24 10:29:17.202: INFO: Pod pod-a20bd4b1-7da4-404a-8f3d-33ca0bd16092 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:29:17.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1101" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":130,"skipped":2122,"failed":0}
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Oct 24 10:29:17.520: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d8894be5-0254-413f-b61b-4332594d96b5" in namespace "projected-1294" to be "Succeeded or Failed"
Oct 24 10:29:17.724: INFO: Pod "downwardapi-volume-d8894be5-0254-413f-b61b-4332594d96b5": Phase="Pending", Reason="", readiness=false. Elapsed: 204.249802ms
Oct 24 10:29:19.762: INFO: Pod "downwardapi-volume-d8894be5-0254-413f-b61b-4332594d96b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.242195728s
STEP: Saw pod success
Oct 24 10:29:19.762: INFO: Pod "downwardapi-volume-d8894be5-0254-413f-b61b-4332594d96b5" satisfied condition "Succeeded or Failed"
Oct 24 10:29:19.802: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod downwardapi-volume-d8894be5-0254-413f-b61b-4332594d96b5 container client-container: <nil>
STEP: delete the pod
Oct 24 10:29:20.261: INFO: Waiting for pod downwardapi-volume-d8894be5-0254-413f-b61b-4332594d96b5 to disappear
Oct 24 10:29:20.299: INFO: Pod downwardapi-volume-d8894be5-0254-413f-b61b-4332594d96b5 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:29:20.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1294" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":306,"completed":131,"skipped":2125,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Oct 24 10:29:20.402: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0644 on tmpfs
Oct 24 10:29:20.632: INFO: Waiting up to 5m0s for pod "pod-e76af49e-c979-46c4-b72d-9233bd38cbe4" in namespace "emptydir-762" to be "Succeeded or Failed"
Oct 24 10:29:20.669: INFO: Pod "pod-e76af49e-c979-46c4-b72d-9233bd38cbe4": Phase="Pending", Reason="", readiness=false. Elapsed: 37.212144ms
Oct 24 10:29:22.818: INFO: Pod "pod-e76af49e-c979-46c4-b72d-9233bd38cbe4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.186494256s
Oct 24 10:29:24.856: INFO: Pod "pod-e76af49e-c979-46c4-b72d-9233bd38cbe4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.2243127s
STEP: Saw pod success
Oct 24 10:29:24.856: INFO: Pod "pod-e76af49e-c979-46c4-b72d-9233bd38cbe4" satisfied condition "Succeeded or Failed"
Oct 24 10:29:24.894: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod pod-e76af49e-c979-46c4-b72d-9233bd38cbe4 container test-container: <nil>
STEP: delete the pod
Oct 24 10:29:25.000: INFO: Waiting for pod pod-e76af49e-c979-46c4-b72d-9233bd38cbe4 to disappear
Oct 24 10:29:25.039: INFO: Pod pod-e76af49e-c979-46c4-b72d-9233bd38cbe4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:29:25.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-762" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":132,"skipped":2152,"failed":0}
SSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Services
... skipping 24 lines ...
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:29:38.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-437" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:781
•{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":306,"completed":133,"skipped":2162,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] IngressClass API 
   should support creating IngressClass API operations [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] IngressClass API
... skipping 21 lines ...
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] IngressClass API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:29:39.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingressclass-6601" for this suite.
•{"msg":"PASSED [sig-network] IngressClass API  should support creating IngressClass API operations [Conformance]","total":306,"completed":134,"skipped":2197,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Oct 24 10:29:39.798: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward api env vars
Oct 24 10:29:40.268: INFO: Waiting up to 5m0s for pod "downward-api-3886801e-eeff-450a-a34d-c4a13bb2052c" in namespace "downward-api-3269" to be "Succeeded or Failed"
Oct 24 10:29:40.342: INFO: Pod "downward-api-3886801e-eeff-450a-a34d-c4a13bb2052c": Phase="Pending", Reason="", readiness=false. Elapsed: 74.298664ms
Oct 24 10:29:42.382: INFO: Pod "downward-api-3886801e-eeff-450a-a34d-c4a13bb2052c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.114368984s
STEP: Saw pod success
Oct 24 10:29:42.382: INFO: Pod "downward-api-3886801e-eeff-450a-a34d-c4a13bb2052c" satisfied condition "Succeeded or Failed"
Oct 24 10:29:42.420: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod downward-api-3886801e-eeff-450a-a34d-c4a13bb2052c container dapi-container: <nil>
STEP: delete the pod
Oct 24 10:29:42.521: INFO: Waiting for pod downward-api-3886801e-eeff-450a-a34d-c4a13bb2052c to disappear
Oct 24 10:29:42.558: INFO: Pod downward-api-3886801e-eeff-450a-a34d-c4a13bb2052c no longer exists
[AfterEach] [sig-node] Downward API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:29:42.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3269" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":306,"completed":135,"skipped":2218,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] ConfigMap
... skipping 12 lines ...
STEP: Creating configMap with name cm-test-opt-create-d9e17850-02da-4ec5-bd0c-9819895e125e
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:29:49.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-662" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":306,"completed":136,"skipped":2236,"failed":0}
S
------------------------------
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-cli] Kubectl client
... skipping 34 lines ...
Oct 24 10:29:55.425: INFO: stdout: "service/rm3 exposed\n"
Oct 24 10:29:55.463: INFO: Service rm3 in namespace kubectl-1078 found.
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:29:57.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1078" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":306,"completed":137,"skipped":2237,"failed":0}

------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] StatefulSet
... skipping 85 lines ...
Oct 24 10:31:04.020: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Oct 24 10:31:04.020: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Oct 24 10:31:04.020: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Oct 24 10:31:04.020: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=statefulset-1057 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 24 10:31:04.524: INFO: rc: 1
Oct 24 10:31:04.524: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=statefulset-1057 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: Internal error occurred: error executing command in container: failed to exec in container: container is in CONTAINER_EXITED state

error:
exit status 1
Oct 24 10:31:14.524: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=statefulset-1057 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 24 10:31:14.756: INFO: rc: 1
Oct 24 10:31:14.756: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=statefulset-1057 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
... skipping 190 lines (repeated 10s RunHostCmd retries against ss-2, each failing with: Error from server (NotFound): pods "ss-2" not found) ...
Oct 24 10:34:39.610: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=statefulset-1057 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 24 10:34:39.843: INFO: rc: 1
Oct 24 10:34:39.843: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=statefulset-1057 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Oct 24 10:34:49.844: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=statefulset-1057 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 24 10:34:50.089: INFO: rc: 1
Oct 24 10:34:50.089: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=statefulset-1057 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Oct 24 10:35:00.090: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=statefulset-1057 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 24 10:35:00.323: INFO: rc: 1
Oct 24 10:35:00.323: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=statefulset-1057 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Oct 24 10:35:10.323: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=statefulset-1057 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 24 10:35:10.558: INFO: rc: 1
Oct 24 10:35:10.558: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=statefulset-1057 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Oct 24 10:35:20.558: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=statefulset-1057 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 24 10:35:20.839: INFO: rc: 1
Oct 24 10:35:20.839: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=statefulset-1057 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Oct 24 10:35:30.840: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=statefulset-1057 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 24 10:35:31.068: INFO: rc: 1
Oct 24 10:35:31.068: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=statefulset-1057 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Oct 24 10:35:41.069: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=statefulset-1057 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 24 10:35:41.416: INFO: rc: 1
Oct 24 10:35:41.416: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=statefulset-1057 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Oct 24 10:35:51.417: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=statefulset-1057 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 24 10:35:51.694: INFO: rc: 1
Oct 24 10:35:51.694: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=statefulset-1057 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Oct 24 10:36:01.695: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=statefulset-1057 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 24 10:36:01.925: INFO: rc: 1
Oct 24 10:36:01.925: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=statefulset-1057 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Oct 24 10:36:11.926: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=statefulset-1057 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 24 10:36:12.164: INFO: rc: 1
Oct 24 10:36:12.164: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: 
Oct 24 10:36:12.164: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
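StatefulSet pods are named `<name>-<ordinal>`, and scale-down removes the highest ordinal first, which is what this verification step checks. A minimal sketch of that ordering for a hypothetical 3-replica set named `ss`:

```shell
#!/bin/sh
# StatefulSet pods are named <name>-<ordinal>; scale-down deletes the
# highest ordinal first. Compute the removal order for replicas=3 -> 0.
name=ss
replicas=3
order=""
i=$((replicas - 1))
while [ "$i" -ge 0 ]; do
    order="$order $name-$i"
    i=$((i - 1))
done
order="${order# }"   # trim the leading space
echo "deletion order: $order"
```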
... skipping 13 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":306,"completed":138,"skipped":2237,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] Job
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 24 10:36:12.758: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:36:19.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-4631" for this suite.
•{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":306,"completed":139,"skipped":2245,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes control plane services is included in cluster-info  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-cli] Kubectl client
... skipping 11 lines ...
Oct 24 10:36:19.562: INFO: stderr: ""
Oct 24 10:36:19.562: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://34.105.36.219\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:36:19.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8636" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info  [Conformance]","total":306,"completed":140,"skipped":2258,"failed":0}
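The `cluster-info` stdout captured above carries ANSI color codes, so any match on the control-plane banner should strip them first. A sketch using the log's own sample output (the `sed` pattern removes SGR escape sequences):

```shell
#!/bin/sh
# The cluster-info output in the log is ANSI-colored; strip the color
# codes before matching on the control-plane banner. The sample string
# below reproduces the stdout captured in the log.
out=$(printf '\033[0;32mKubernetes control plane\033[0m is running at \033[0;33mhttps://34.105.36.219\033[0m')
esc=$(printf '\033')
plain=$(printf '%s' "$out" | sed "s/${esc}\[[0-9;]*m//g")
echo "$plain"
```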
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
... skipping 42 lines ...
Oct 24 10:36:27.344: INFO: >>> kubeConfig: /workspace/.kube/config
Oct 24 10:36:27.616: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:36:27.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-276" for this suite.
•{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":141,"skipped":2290,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Services
... skipping 24 lines ...
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:36:48.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3915" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:781
•{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":306,"completed":142,"skipped":2302,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Oct 24 10:36:48.486: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward api env vars
Oct 24 10:36:48.719: INFO: Waiting up to 5m0s for pod "downward-api-9121a7b2-c7a9-4e7e-981e-6a053ad66ae5" in namespace "downward-api-7450" to be "Succeeded or Failed"
Oct 24 10:36:48.912: INFO: Pod "downward-api-9121a7b2-c7a9-4e7e-981e-6a053ad66ae5": Phase="Pending", Reason="", readiness=false. Elapsed: 193.257507ms
Oct 24 10:36:50.980: INFO: Pod "downward-api-9121a7b2-c7a9-4e7e-981e-6a053ad66ae5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.260557756s
STEP: Saw pod success
Oct 24 10:36:50.980: INFO: Pod "downward-api-9121a7b2-c7a9-4e7e-981e-6a053ad66ae5" satisfied condition "Succeeded or Failed"
Oct 24 10:36:51.017: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod downward-api-9121a7b2-c7a9-4e7e-981e-6a053ad66ae5 container dapi-container: <nil>
STEP: delete the pod
Oct 24 10:36:51.226: INFO: Waiting for pod downward-api-9121a7b2-c7a9-4e7e-981e-6a053ad66ae5 to disappear
Oct 24 10:36:51.264: INFO: Pod downward-api-9121a7b2-c7a9-4e7e-981e-6a053ad66ae5 no longer exists
[AfterEach] [sig-node] Downward API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:36:51.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7450" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":306,"completed":143,"skipped":2344,"failed":0}
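The Downward API test above exposes container limits/requests to the container as env vars via `resourceFieldRef`. A minimal pod manifest sketching that mechanism; the pod/container names and resource values here are illustrative, not the test's actual spec:

```shell
#!/bin/sh
# Emit a minimal pod manifest that surfaces cpu/memory limits as env vars
# through the downward API's resourceFieldRef (illustrative names/values).
manifest=$(cat <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dapi-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env"]
    resources:
      limits:
        cpu: "500m"
        memory: "64Mi"
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
EOF
)
printf '%s\n' "$manifest"
```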
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should adopt matching orphans and release non-matching pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] Job
... skipping 19 lines ...
Oct 24 10:36:57.063: INFO: Pod "adopt-release-lgwpj": Phase="Running", Reason="", readiness=true. Elapsed: 75.727178ms
Oct 24 10:36:57.063: INFO: Pod "adopt-release-lgwpj" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:36:57.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-7307" for this suite.
•{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":306,"completed":144,"skipped":2377,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] DNS
... skipping 18 lines ...
Oct 24 10:37:02.077: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5698.svc.cluster.local from pod dns-5698/dns-test-80102634-9dc6-4d39-951f-ce1a05963d10: the server could not find the requested resource (get pods dns-test-80102634-9dc6-4d39-951f-ce1a05963d10)
Oct 24 10:37:02.117: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5698.svc.cluster.local from pod dns-5698/dns-test-80102634-9dc6-4d39-951f-ce1a05963d10: the server could not find the requested resource (get pods dns-test-80102634-9dc6-4d39-951f-ce1a05963d10)
Oct 24 10:37:02.236: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5698.svc.cluster.local from pod dns-5698/dns-test-80102634-9dc6-4d39-951f-ce1a05963d10: the server could not find the requested resource (get pods dns-test-80102634-9dc6-4d39-951f-ce1a05963d10)
Oct 24 10:37:02.279: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5698.svc.cluster.local from pod dns-5698/dns-test-80102634-9dc6-4d39-951f-ce1a05963d10: the server could not find the requested resource (get pods dns-test-80102634-9dc6-4d39-951f-ce1a05963d10)
Oct 24 10:37:02.317: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5698.svc.cluster.local from pod dns-5698/dns-test-80102634-9dc6-4d39-951f-ce1a05963d10: the server could not find the requested resource (get pods dns-test-80102634-9dc6-4d39-951f-ce1a05963d10)
Oct 24 10:37:02.358: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5698.svc.cluster.local from pod dns-5698/dns-test-80102634-9dc6-4d39-951f-ce1a05963d10: the server could not find the requested resource (get pods dns-test-80102634-9dc6-4d39-951f-ce1a05963d10)
Oct 24 10:37:02.438: INFO: Lookups using dns-5698/dns-test-80102634-9dc6-4d39-951f-ce1a05963d10 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5698.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5698.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5698.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5698.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5698.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5698.svc.cluster.local jessie_udp@dns-test-service-2.dns-5698.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5698.svc.cluster.local]

... skipping 49 lines (5 more identical 5s retry batches: the same eight wheezy/jessie subdomain lookups from pod dns-test-80102634-9dc6-4d39-951f-ce1a05963d10 failing with 'the server could not find the requested resource') ...

Oct 24 10:37:32.919: INFO: DNS probes using dns-5698/dns-test-80102634-9dc6-4d39-951f-ce1a05963d10 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:37:33.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5698" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":306,"completed":145,"skipped":2388,"failed":0}

------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 24 10:37:33.141: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating the pod
Oct 24 10:37:33.334: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:37:39.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7249" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":306,"completed":146,"skipped":2388,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 23 lines ...
Oct 24 10:37:46.168: INFO: stderr: ""
Oct 24 10:37:46.168: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-1787-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     <empty>\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:37:50.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9716" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":306,"completed":147,"skipped":2443,"failed":0}
SSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] DNS
... skipping 16 lines ...

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:37:53.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7661" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":306,"completed":148,"skipped":2448,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 54 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:38:05.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5056" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":306,"completed":149,"skipped":2471,"failed":0}
SSSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] DNS
... skipping 26 lines ...
Oct 24 10:38:16.868: INFO: Unable to read jessie_udp@dns-test-service.dns-3823 from pod dns-3823/dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c: the server could not find the requested resource (get pods dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c)
Oct 24 10:38:16.907: INFO: Unable to read jessie_tcp@dns-test-service.dns-3823 from pod dns-3823/dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c: the server could not find the requested resource (get pods dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c)
Oct 24 10:38:16.949: INFO: Unable to read jessie_udp@dns-test-service.dns-3823.svc from pod dns-3823/dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c: the server could not find the requested resource (get pods dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c)
Oct 24 10:38:16.988: INFO: Unable to read jessie_tcp@dns-test-service.dns-3823.svc from pod dns-3823/dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c: the server could not find the requested resource (get pods dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c)
Oct 24 10:38:17.034: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3823.svc from pod dns-3823/dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c: the server could not find the requested resource (get pods dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c)
Oct 24 10:38:17.073: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3823.svc from pod dns-3823/dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c: the server could not find the requested resource (get pods dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c)
Oct 24 10:38:17.314: INFO: Lookups using dns-3823/dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3823 wheezy_tcp@dns-test-service.dns-3823 wheezy_udp@dns-test-service.dns-3823.svc wheezy_tcp@dns-test-service.dns-3823.svc wheezy_udp@_http._tcp.dns-test-service.dns-3823.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3823.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3823 jessie_tcp@dns-test-service.dns-3823 jessie_udp@dns-test-service.dns-3823.svc jessie_tcp@dns-test-service.dns-3823.svc jessie_udp@_http._tcp.dns-test-service.dns-3823.svc jessie_tcp@_http._tcp.dns-test-service.dns-3823.svc]

Oct 24 10:38:22.381: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3823/dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c: the server could not find the requested resource (get pods dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c)
Oct 24 10:38:22.444: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3823/dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c: the server could not find the requested resource (get pods dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c)
Oct 24 10:38:22.500: INFO: Unable to read wheezy_udp@dns-test-service.dns-3823 from pod dns-3823/dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c: the server could not find the requested resource (get pods dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c)
Oct 24 10:38:22.566: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3823 from pod dns-3823/dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c: the server could not find the requested resource (get pods dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c)
Oct 24 10:38:22.621: INFO: Unable to read wheezy_udp@dns-test-service.dns-3823.svc from pod dns-3823/dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c: the server could not find the requested resource (get pods dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c)
... skipping 5 lines ...
Oct 24 10:38:23.198: INFO: Unable to read jessie_udp@dns-test-service.dns-3823 from pod dns-3823/dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c: the server could not find the requested resource (get pods dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c)
Oct 24 10:38:23.240: INFO: Unable to read jessie_tcp@dns-test-service.dns-3823 from pod dns-3823/dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c: the server could not find the requested resource (get pods dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c)
Oct 24 10:38:23.280: INFO: Unable to read jessie_udp@dns-test-service.dns-3823.svc from pod dns-3823/dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c: the server could not find the requested resource (get pods dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c)
Oct 24 10:38:23.320: INFO: Unable to read jessie_tcp@dns-test-service.dns-3823.svc from pod dns-3823/dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c: the server could not find the requested resource (get pods dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c)
Oct 24 10:38:23.360: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3823.svc from pod dns-3823/dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c: the server could not find the requested resource (get pods dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c)
Oct 24 10:38:23.400: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3823.svc from pod dns-3823/dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c: the server could not find the requested resource (get pods dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c)
Oct 24 10:38:23.639: INFO: Lookups using dns-3823/dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3823 wheezy_tcp@dns-test-service.dns-3823 wheezy_udp@dns-test-service.dns-3823.svc wheezy_tcp@dns-test-service.dns-3823.svc wheezy_udp@_http._tcp.dns-test-service.dns-3823.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3823.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3823 jessie_tcp@dns-test-service.dns-3823 jessie_udp@dns-test-service.dns-3823.svc jessie_tcp@dns-test-service.dns-3823.svc jessie_udp@_http._tcp.dns-test-service.dns-3823.svc jessie_tcp@_http._tcp.dns-test-service.dns-3823.svc]

Oct 24 10:38:27.354: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3823/dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c: the server could not find the requested resource (get pods dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c)
Oct 24 10:38:27.393: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3823/dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c: the server could not find the requested resource (get pods dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c)
Oct 24 10:38:27.433: INFO: Unable to read wheezy_udp@dns-test-service.dns-3823 from pod dns-3823/dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c: the server could not find the requested resource (get pods dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c)
Oct 24 10:38:27.471: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3823 from pod dns-3823/dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c: the server could not find the requested resource (get pods dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c)
Oct 24 10:38:27.509: INFO: Unable to read wheezy_udp@dns-test-service.dns-3823.svc from pod dns-3823/dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c: the server could not find the requested resource (get pods dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c)
... skipping 5 lines ...
Oct 24 10:38:28.122: INFO: Unable to read jessie_udp@dns-test-service.dns-3823 from pod dns-3823/dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c: the server could not find the requested resource (get pods dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c)
Oct 24 10:38:28.228: INFO: Unable to read jessie_tcp@dns-test-service.dns-3823 from pod dns-3823/dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c: the server could not find the requested resource (get pods dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c)
Oct 24 10:38:28.364: INFO: Unable to read jessie_udp@dns-test-service.dns-3823.svc from pod dns-3823/dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c: the server could not find the requested resource (get pods dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c)
Oct 24 10:38:28.429: INFO: Unable to read jessie_tcp@dns-test-service.dns-3823.svc from pod dns-3823/dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c: the server could not find the requested resource (get pods dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c)
Oct 24 10:38:28.489: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3823.svc from pod dns-3823/dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c: the server could not find the requested resource (get pods dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c)
Oct 24 10:38:28.559: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3823.svc from pod dns-3823/dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c: the server could not find the requested resource (get pods dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c)
Oct 24 10:38:29.028: INFO: Lookups using dns-3823/dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3823 wheezy_tcp@dns-test-service.dns-3823 wheezy_udp@dns-test-service.dns-3823.svc wheezy_tcp@dns-test-service.dns-3823.svc wheezy_udp@_http._tcp.dns-test-service.dns-3823.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3823.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3823 jessie_tcp@dns-test-service.dns-3823 jessie_udp@dns-test-service.dns-3823.svc jessie_tcp@dns-test-service.dns-3823.svc jessie_udp@_http._tcp.dns-test-service.dns-3823.svc jessie_tcp@_http._tcp.dns-test-service.dns-3823.svc]

Oct 24 10:38:32.355: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3823/dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c: the server could not find the requested resource (get pods dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c)
Oct 24 10:38:32.395: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3823/dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c: the server could not find the requested resource (get pods dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c)
Oct 24 10:38:32.436: INFO: Unable to read wheezy_udp@dns-test-service.dns-3823 from pod dns-3823/dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c: the server could not find the requested resource (get pods dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c)
Oct 24 10:38:32.475: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3823 from pod dns-3823/dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c: the server could not find the requested resource (get pods dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c)
Oct 24 10:38:32.515: INFO: Unable to read wheezy_udp@dns-test-service.dns-3823.svc from pod dns-3823/dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c: the server could not find the requested resource (get pods dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c)
... skipping 5 lines ...
Oct 24 10:38:33.007: INFO: Unable to read jessie_udp@dns-test-service.dns-3823 from pod dns-3823/dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c: the server could not find the requested resource (get pods dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c)
Oct 24 10:38:33.050: INFO: Unable to read jessie_tcp@dns-test-service.dns-3823 from pod dns-3823/dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c: the server could not find the requested resource (get pods dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c)
Oct 24 10:38:33.093: INFO: Unable to read jessie_udp@dns-test-service.dns-3823.svc from pod dns-3823/dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c: the server could not find the requested resource (get pods dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c)
Oct 24 10:38:33.134: INFO: Unable to read jessie_tcp@dns-test-service.dns-3823.svc from pod dns-3823/dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c: the server could not find the requested resource (get pods dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c)
Oct 24 10:38:33.175: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3823.svc from pod dns-3823/dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c: the server could not find the requested resource (get pods dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c)
Oct 24 10:38:33.216: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3823.svc from pod dns-3823/dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c: the server could not find the requested resource (get pods dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c)
Oct 24 10:38:33.464: INFO: Lookups using dns-3823/dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3823 wheezy_tcp@dns-test-service.dns-3823 wheezy_udp@dns-test-service.dns-3823.svc wheezy_tcp@dns-test-service.dns-3823.svc wheezy_udp@_http._tcp.dns-test-service.dns-3823.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3823.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3823 jessie_tcp@dns-test-service.dns-3823 jessie_udp@dns-test-service.dns-3823.svc jessie_tcp@dns-test-service.dns-3823.svc jessie_udp@_http._tcp.dns-test-service.dns-3823.svc jessie_tcp@_http._tcp.dns-test-service.dns-3823.svc]

Oct 24 10:38:37.355: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3823/dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c: the server could not find the requested resource (get pods dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c)
Oct 24 10:38:37.393: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3823/dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c: the server could not find the requested resource (get pods dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c)
Oct 24 10:38:37.433: INFO: Unable to read wheezy_udp@dns-test-service.dns-3823 from pod dns-3823/dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c: the server could not find the requested resource (get pods dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c)
Oct 24 10:38:37.471: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3823 from pod dns-3823/dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c: the server could not find the requested resource (get pods dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c)
Oct 24 10:38:37.511: INFO: Unable to read wheezy_udp@dns-test-service.dns-3823.svc from pod dns-3823/dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c: the server could not find the requested resource (get pods dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c)
Oct 24 10:38:37.551: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3823.svc from pod dns-3823/dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c: the server could not find the requested resource (get pods dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c)
Oct 24 10:38:37.590: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3823.svc from pod dns-3823/dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c: the server could not find the requested resource (get pods dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c)
Oct 24 10:38:37.629: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3823.svc from pod dns-3823/dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c: the server could not find the requested resource (get pods dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c)
Oct 24 10:38:38.153: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3823.svc from pod dns-3823/dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c: the server could not find the requested resource (get pods dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c)
Oct 24 10:38:38.435: INFO: Lookups using dns-3823/dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3823 wheezy_tcp@dns-test-service.dns-3823 wheezy_udp@dns-test-service.dns-3823.svc wheezy_tcp@dns-test-service.dns-3823.svc wheezy_udp@_http._tcp.dns-test-service.dns-3823.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3823.svc jessie_udp@_http._tcp.dns-test-service.dns-3823.svc]

Oct 24 10:38:43.752: INFO: DNS probes using dns-3823/dns-test-f7fddc97-c16a-41b7-bcc8-e918dfae824c succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:38:43.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3823" for this suite.
•{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":306,"completed":150,"skipped":2475,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Oct 24 10:38:44.261: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7c5828c1-5142-4a3c-90a3-0355e2857ac3" in namespace "downward-api-1036" to be "Succeeded or Failed"
Oct 24 10:38:44.299: INFO: Pod "downwardapi-volume-7c5828c1-5142-4a3c-90a3-0355e2857ac3": Phase="Pending", Reason="", readiness=false. Elapsed: 37.867895ms
Oct 24 10:38:46.336: INFO: Pod "downwardapi-volume-7c5828c1-5142-4a3c-90a3-0355e2857ac3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.075315204s
STEP: Saw pod success
Oct 24 10:38:46.336: INFO: Pod "downwardapi-volume-7c5828c1-5142-4a3c-90a3-0355e2857ac3" satisfied condition "Succeeded or Failed"
Oct 24 10:38:46.374: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod downwardapi-volume-7c5828c1-5142-4a3c-90a3-0355e2857ac3 container client-container: <nil>
STEP: delete the pod
Oct 24 10:38:46.470: INFO: Waiting for pod downwardapi-volume-7c5828c1-5142-4a3c-90a3-0355e2857ac3 to disappear
Oct 24 10:38:46.507: INFO: Pod downwardapi-volume-7c5828c1-5142-4a3c-90a3-0355e2857ac3 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:38:46.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1036" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":306,"completed":151,"skipped":2489,"failed":0}
SS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name projected-configmap-test-volume-map-d9ff1233-b482-40e6-adb8-0bcf3c349ab3
STEP: Creating a pod to test consume configMaps
Oct 24 10:38:46.855: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4df142c7-14d4-4927-a028-9d2216f74e21" in namespace "projected-9045" to be "Succeeded or Failed"
Oct 24 10:38:46.892: INFO: Pod "pod-projected-configmaps-4df142c7-14d4-4927-a028-9d2216f74e21": Phase="Pending", Reason="", readiness=false. Elapsed: 37.028588ms
Oct 24 10:38:48.932: INFO: Pod "pod-projected-configmaps-4df142c7-14d4-4927-a028-9d2216f74e21": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.077585087s
STEP: Saw pod success
Oct 24 10:38:48.933: INFO: Pod "pod-projected-configmaps-4df142c7-14d4-4927-a028-9d2216f74e21" satisfied condition "Succeeded or Failed"
Oct 24 10:38:48.970: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod pod-projected-configmaps-4df142c7-14d4-4927-a028-9d2216f74e21 container agnhost-container: <nil>
STEP: delete the pod
Oct 24 10:38:49.160: INFO: Waiting for pod pod-projected-configmaps-4df142c7-14d4-4927-a028-9d2216f74e21 to disappear
Oct 24 10:38:49.197: INFO: Pod pod-projected-configmaps-4df142c7-14d4-4927-a028-9d2216f74e21 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:38:49.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9045" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":306,"completed":152,"skipped":2491,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 21 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:38:55.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5430" for this suite.
STEP: Destroying namespace "webhook-5430-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":306,"completed":153,"skipped":2531,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Oct 24 10:38:56.139: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0666 on node default medium
Oct 24 10:38:56.366: INFO: Waiting up to 5m0s for pod "pod-6cd54e73-59af-4c3b-a651-723300aeefcf" in namespace "emptydir-3178" to be "Succeeded or Failed"
Oct 24 10:38:56.409: INFO: Pod "pod-6cd54e73-59af-4c3b-a651-723300aeefcf": Phase="Pending", Reason="", readiness=false. Elapsed: 42.90498ms
Oct 24 10:38:58.448: INFO: Pod "pod-6cd54e73-59af-4c3b-a651-723300aeefcf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.081536897s
STEP: Saw pod success
Oct 24 10:38:58.448: INFO: Pod "pod-6cd54e73-59af-4c3b-a651-723300aeefcf" satisfied condition "Succeeded or Failed"
Oct 24 10:38:58.486: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod pod-6cd54e73-59af-4c3b-a651-723300aeefcf container test-container: <nil>
STEP: delete the pod
Oct 24 10:38:58.804: INFO: Waiting for pod pod-6cd54e73-59af-4c3b-a651-723300aeefcf to disappear
Oct 24 10:38:58.841: INFO: Pod pod-6cd54e73-59af-4c3b-a651-723300aeefcf no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:38:58.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3178" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":154,"skipped":2547,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] ReplicationController
... skipping 11 lines ...
Oct 24 10:38:59.218: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:38:59.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-228" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":306,"completed":155,"skipped":2559,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] server version 
  should find the server version [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] server version
... skipping 11 lines ...
Oct 24 10:38:59.717: INFO: cleanMinorVersion: 20
Oct 24 10:38:59.717: INFO: Minor version: 20+
[AfterEach] [sig-api-machinery] server version
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:38:59.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "server-version-9427" for this suite.
•{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":306,"completed":156,"skipped":2573,"failed":0}
SS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] Daemon set [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 24 10:38:59.796: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:129
[It] should retry creating failed daemon pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Oct 24 10:39:00.824: INFO: DaemonSet pods can't tolerate node bootstrap-e2e-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Oct 24 10:39:00.919: INFO: Number of nodes with available pods: 0
Oct 24 10:39:00.920: INFO: Node bootstrap-e2e-minion-group-bf58 is running more than one daemon pod
... skipping 3 lines ...
Oct 24 10:39:02.960: INFO: DaemonSet pods can't tolerate node bootstrap-e2e-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Oct 24 10:39:03.005: INFO: Number of nodes with available pods: 1
Oct 24 10:39:03.005: INFO: Node bootstrap-e2e-minion-group-bf58 is running more than one daemon pod
Oct 24 10:39:03.982: INFO: DaemonSet pods can't tolerate node bootstrap-e2e-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Oct 24 10:39:04.023: INFO: Number of nodes with available pods: 3
Oct 24 10:39:04.023: INFO: Number of running nodes: 3, number of available pods: 3
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Oct 24 10:39:04.190: INFO: DaemonSet pods can't tolerate node bootstrap-e2e-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Oct 24 10:39:04.265: INFO: Number of nodes with available pods: 2
Oct 24 10:39:04.265: INFO: Node bootstrap-e2e-minion-group-bf58 is running more than one daemon pod
Oct 24 10:39:05.377: INFO: DaemonSet pods can't tolerate node bootstrap-e2e-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Oct 24 10:39:05.453: INFO: Number of nodes with available pods: 2
Oct 24 10:39:05.453: INFO: Node bootstrap-e2e-minion-group-bf58 is running more than one daemon pod
Oct 24 10:39:06.375: INFO: DaemonSet pods can't tolerate node bootstrap-e2e-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Oct 24 10:39:06.524: INFO: Number of nodes with available pods: 3
Oct 24 10:39:06.524: INFO: Number of running nodes: 3, number of available pods: 3
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:95
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3207, will wait for the garbage collector to delete the pods
Oct 24 10:39:07.006: INFO: Deleting DaemonSet.extensions daemon-set took: 145.888715ms
Oct 24 10:39:07.206: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.324526ms
... skipping 4 lines ...
Oct 24 10:39:19.784: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"14086"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:39:19.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3207" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":306,"completed":157,"skipped":2575,"failed":0}
SS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 19 lines ...
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:39:36.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4076" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":306,"completed":158,"skipped":2577,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Probing container
... skipping 7 lines ...
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[AfterEach] [k8s.io] Probing container
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:40:37.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5885" for this suite.
•{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":306,"completed":159,"skipped":2590,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating projection with secret that has name projected-secret-test-d4247a2a-888a-4d30-a2dc-1667a8109451
STEP: Creating a pod to test consume secrets
Oct 24 10:40:37.948: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f6298410-1ff5-45a2-bc83-16e50abf3d13" in namespace "projected-539" to be "Succeeded or Failed"
Oct 24 10:40:38.018: INFO: Pod "pod-projected-secrets-f6298410-1ff5-45a2-bc83-16e50abf3d13": Phase="Pending", Reason="", readiness=false. Elapsed: 69.902407ms
Oct 24 10:40:40.111: INFO: Pod "pod-projected-secrets-f6298410-1ff5-45a2-bc83-16e50abf3d13": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.163242991s
STEP: Saw pod success
Oct 24 10:40:40.111: INFO: Pod "pod-projected-secrets-f6298410-1ff5-45a2-bc83-16e50abf3d13" satisfied condition "Succeeded or Failed"
Oct 24 10:40:40.149: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod pod-projected-secrets-f6298410-1ff5-45a2-bc83-16e50abf3d13 container projected-secret-volume-test: <nil>
STEP: delete the pod
Oct 24 10:40:40.254: INFO: Waiting for pod pod-projected-secrets-f6298410-1ff5-45a2-bc83-16e50abf3d13 to disappear
Oct 24 10:40:40.291: INFO: Pod pod-projected-secrets-f6298410-1ff5-45a2-bc83-16e50abf3d13 no longer exists
[AfterEach] [sig-storage] Projected secret
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:40:40.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-539" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":160,"skipped":2624,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Oct 24 10:40:40.371: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0644 on tmpfs
Oct 24 10:40:40.616: INFO: Waiting up to 5m0s for pod "pod-ffcc20b7-ad12-4fca-8d90-045aa38abd92" in namespace "emptydir-8064" to be "Succeeded or Failed"
Oct 24 10:40:40.653: INFO: Pod "pod-ffcc20b7-ad12-4fca-8d90-045aa38abd92": Phase="Pending", Reason="", readiness=false. Elapsed: 37.095257ms
Oct 24 10:40:42.691: INFO: Pod "pod-ffcc20b7-ad12-4fca-8d90-045aa38abd92": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.075001411s
STEP: Saw pod success
Oct 24 10:40:42.691: INFO: Pod "pod-ffcc20b7-ad12-4fca-8d90-045aa38abd92" satisfied condition "Succeeded or Failed"
Oct 24 10:40:42.728: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod pod-ffcc20b7-ad12-4fca-8d90-045aa38abd92 container test-container: <nil>
STEP: delete the pod
Oct 24 10:40:42.814: INFO: Waiting for pod pod-ffcc20b7-ad12-4fca-8d90-045aa38abd92 to disappear
Oct 24 10:40:42.851: INFO: Pod pod-ffcc20b7-ad12-4fca-8d90-045aa38abd92 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:40:42.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8064" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":161,"skipped":2651,"failed":0}
SSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Kubelet
... skipping 7 lines ...
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[AfterEach] [k8s.io] Kubelet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:40:45.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-900" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":162,"skipped":2657,"failed":0}
SSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] ReplicationController
... skipping 13 lines ...
Oct 24 10:40:47.169: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:40:47.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1908" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":306,"completed":163,"skipped":2660,"failed":0}

------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Oct 24 10:40:47.289: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0777 on node default medium
Oct 24 10:40:47.570: INFO: Waiting up to 5m0s for pod "pod-1adcb477-caaf-4fb8-be97-c2e7c6211342" in namespace "emptydir-4918" to be "Succeeded or Failed"
Oct 24 10:40:47.607: INFO: Pod "pod-1adcb477-caaf-4fb8-be97-c2e7c6211342": Phase="Pending", Reason="", readiness=false. Elapsed: 37.474481ms
Oct 24 10:40:49.646: INFO: Pod "pod-1adcb477-caaf-4fb8-be97-c2e7c6211342": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.075760194s
STEP: Saw pod success
Oct 24 10:40:49.646: INFO: Pod "pod-1adcb477-caaf-4fb8-be97-c2e7c6211342" satisfied condition "Succeeded or Failed"
Oct 24 10:40:49.683: INFO: Trying to get logs from node bootstrap-e2e-minion-group-g27b pod pod-1adcb477-caaf-4fb8-be97-c2e7c6211342 container test-container: <nil>
STEP: delete the pod
Oct 24 10:40:49.799: INFO: Waiting for pod pod-1adcb477-caaf-4fb8-be97-c2e7c6211342 to disappear
Oct 24 10:40:49.836: INFO: Pod pod-1adcb477-caaf-4fb8-be97-c2e7c6211342 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:40:49.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4918" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":164,"skipped":2660,"failed":0}

------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 25 lines ...
Oct 24 10:41:09.026: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Oct 24 10:41:09.073: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:41:09.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9897" for this suite.
•{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":306,"completed":165,"skipped":2660,"failed":0}
SSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-node] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap configmap-114/configmap-test-2813b970-c919-4659-aa63-2f857314a6f5
STEP: Creating a pod to test consume configMaps
Oct 24 10:41:09.655: INFO: Waiting up to 5m0s for pod "pod-configmaps-f745ba78-4d03-431d-b464-39cfc52d5b2f" in namespace "configmap-114" to be "Succeeded or Failed"
Oct 24 10:41:09.692: INFO: Pod "pod-configmaps-f745ba78-4d03-431d-b464-39cfc52d5b2f": Phase="Pending", Reason="", readiness=false. Elapsed: 37.007247ms
Oct 24 10:41:11.734: INFO: Pod "pod-configmaps-f745ba78-4d03-431d-b464-39cfc52d5b2f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.079469776s
STEP: Saw pod success
Oct 24 10:41:11.734: INFO: Pod "pod-configmaps-f745ba78-4d03-431d-b464-39cfc52d5b2f" satisfied condition "Succeeded or Failed"
Oct 24 10:41:11.775: INFO: Trying to get logs from node bootstrap-e2e-minion-group-g27b pod pod-configmaps-f745ba78-4d03-431d-b464-39cfc52d5b2f container env-test: <nil>
STEP: delete the pod
Oct 24 10:41:11.866: INFO: Waiting for pod pod-configmaps-f745ba78-4d03-431d-b464-39cfc52d5b2f to disappear
Oct 24 10:41:11.905: INFO: Pod pod-configmaps-f745ba78-4d03-431d-b464-39cfc52d5b2f no longer exists
[AfterEach] [sig-node] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:41:11.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-114" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":306,"completed":166,"skipped":2665,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 8 lines ...
Oct 24 10:41:12.201: INFO: >>> kubeConfig: /workspace/.kube/config
Oct 24 10:41:17.337: INFO: >>> kubeConfig: /workspace/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:41:35.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1021" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":306,"completed":167,"skipped":2671,"failed":0}
SSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating secret with name secret-test-map-8e316f29-c1db-477c-aa2f-0d4c6fe3f689
STEP: Creating a pod to test consume secrets
Oct 24 10:41:36.442: INFO: Waiting up to 5m0s for pod "pod-secrets-999a4c92-ee4b-4a0b-b88e-d86c8e09dea5" in namespace "secrets-3163" to be "Succeeded or Failed"
Oct 24 10:41:36.522: INFO: Pod "pod-secrets-999a4c92-ee4b-4a0b-b88e-d86c8e09dea5": Phase="Pending", Reason="", readiness=false. Elapsed: 80.143742ms
Oct 24 10:41:38.562: INFO: Pod "pod-secrets-999a4c92-ee4b-4a0b-b88e-d86c8e09dea5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.119272354s
STEP: Saw pod success
Oct 24 10:41:38.562: INFO: Pod "pod-secrets-999a4c92-ee4b-4a0b-b88e-d86c8e09dea5" satisfied condition "Succeeded or Failed"
Oct 24 10:41:38.642: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod pod-secrets-999a4c92-ee4b-4a0b-b88e-d86c8e09dea5 container secret-volume-test: <nil>
STEP: delete the pod
Oct 24 10:41:38.728: INFO: Waiting for pod pod-secrets-999a4c92-ee4b-4a0b-b88e-d86c8e09dea5 to disappear
Oct 24 10:41:38.765: INFO: Pod pod-secrets-999a4c92-ee4b-4a0b-b88e-d86c8e09dea5 no longer exists
[AfterEach] [sig-storage] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:41:38.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3163" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":306,"completed":168,"skipped":2674,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 48 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:41:40.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5605" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":306,"completed":169,"skipped":2691,"failed":0}
SSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-cli] Kubectl client
... skipping 11 lines ...
Oct 24 10:41:41.016: INFO: stderr: ""
Oct 24 10:41:41.016: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\nbatch/v2alpha1\ncertificates.k8s.io/v1\ncertificates.k8s.io/v1beta1\ncloud.google.com/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nmetrics.k8s.io/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1alpha1\nscheduling.k8s.io/v1beta1\nsnapshot.storage.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:41:41.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4111" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":306,"completed":170,"skipped":2698,"failed":0}

------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Oct 24 10:41:41.337: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7dda45b9-355c-499c-8890-1eb91d976378" in namespace "downward-api-6343" to be "Succeeded or Failed"
Oct 24 10:41:41.374: INFO: Pod "downwardapi-volume-7dda45b9-355c-499c-8890-1eb91d976378": Phase="Pending", Reason="", readiness=false. Elapsed: 36.816401ms
Oct 24 10:41:43.412: INFO: Pod "downwardapi-volume-7dda45b9-355c-499c-8890-1eb91d976378": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.074599535s
STEP: Saw pod success
Oct 24 10:41:43.412: INFO: Pod "downwardapi-volume-7dda45b9-355c-499c-8890-1eb91d976378" satisfied condition "Succeeded or Failed"
Oct 24 10:41:43.450: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod downwardapi-volume-7dda45b9-355c-499c-8890-1eb91d976378 container client-container: <nil>
STEP: delete the pod
Oct 24 10:41:43.544: INFO: Waiting for pod downwardapi-volume-7dda45b9-355c-499c-8890-1eb91d976378 to disappear
Oct 24 10:41:43.582: INFO: Pod downwardapi-volume-7dda45b9-355c-499c-8890-1eb91d976378 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:41:43.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6343" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":306,"completed":171,"skipped":2698,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Probing container
... skipping 13 lines ...
Oct 24 10:42:02.400: INFO: Restart count of pod container-probe-6932/liveness-2a540eb0-e60b-498e-9f08-6b8ea057eba4 is now 1 (16.351267232s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:42:02.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6932" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":306,"completed":172,"skipped":2713,"failed":0}
SSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Probing container
... skipping 13 lines ...
Oct 24 10:42:58.135: INFO: Restart count of pod container-probe-8503/busybox-29e3bae8-0b9c-4edc-a13e-09aaa7f75cbf is now 1 (53.260534817s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:42:58.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8503" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":306,"completed":173,"skipped":2718,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name projected-configmap-test-volume-75cbc51d-493e-485f-85dc-66fbe8570bc5
STEP: Creating a pod to test consume configMaps
Oct 24 10:42:58.587: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ff7adf81-5c2e-4393-a88f-1964c534e92d" in namespace "projected-4101" to be "Succeeded or Failed"
Oct 24 10:42:58.628: INFO: Pod "pod-projected-configmaps-ff7adf81-5c2e-4393-a88f-1964c534e92d": Phase="Pending", Reason="", readiness=false. Elapsed: 40.963836ms
Oct 24 10:43:00.671: INFO: Pod "pod-projected-configmaps-ff7adf81-5c2e-4393-a88f-1964c534e92d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.084068683s
STEP: Saw pod success
Oct 24 10:43:00.671: INFO: Pod "pod-projected-configmaps-ff7adf81-5c2e-4393-a88f-1964c534e92d" satisfied condition "Succeeded or Failed"
Oct 24 10:43:00.712: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod pod-projected-configmaps-ff7adf81-5c2e-4393-a88f-1964c534e92d container projected-configmap-volume-test: <nil>
STEP: delete the pod
Oct 24 10:43:01.030: INFO: Waiting for pod pod-projected-configmaps-ff7adf81-5c2e-4393-a88f-1964c534e92d to disappear
Oct 24 10:43:01.069: INFO: Pod pod-projected-configmaps-ff7adf81-5c2e-4393-a88f-1964c534e92d no longer exists
[AfterEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:43:01.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4101" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":306,"completed":174,"skipped":2727,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating secret with name secret-test-61639fbf-f111-41f7-aaf8-25067283e2d0
STEP: Creating a pod to test consume secrets
Oct 24 10:43:01.455: INFO: Waiting up to 5m0s for pod "pod-secrets-4f55c251-e453-4501-9b4c-6b535fde23fc" in namespace "secrets-7295" to be "Succeeded or Failed"
Oct 24 10:43:01.504: INFO: Pod "pod-secrets-4f55c251-e453-4501-9b4c-6b535fde23fc": Phase="Pending", Reason="", readiness=false. Elapsed: 49.021792ms
Oct 24 10:43:03.549: INFO: Pod "pod-secrets-4f55c251-e453-4501-9b4c-6b535fde23fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.094362126s
STEP: Saw pod success
Oct 24 10:43:03.549: INFO: Pod "pod-secrets-4f55c251-e453-4501-9b4c-6b535fde23fc" satisfied condition "Succeeded or Failed"
Oct 24 10:43:03.599: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod pod-secrets-4f55c251-e453-4501-9b4c-6b535fde23fc container secret-volume-test: <nil>
STEP: delete the pod
Oct 24 10:43:03.753: INFO: Waiting for pod pod-secrets-4f55c251-e453-4501-9b4c-6b535fde23fc to disappear
Oct 24 10:43:03.812: INFO: Pod pod-secrets-4f55c251-e453-4501-9b4c-6b535fde23fc no longer exists
[AfterEach] [sig-storage] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:43:03.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7295" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":175,"skipped":2766,"failed":0}
SS
------------------------------
[sig-network] Services 
  should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Services
... skipping 17 lines ...
STEP: creating replication controller affinity-clusterip-timeout in namespace services-8096
I1024 10:43:07.293272  143945 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-8096, replica count: 3
I1024 10:43:10.393747  143945 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Oct 24 10:43:10.595: INFO: Creating new exec pod
Oct 24 10:43:14.050: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=services-8096 exec execpod-affinity79z6n -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80'
Oct 24 10:43:15.732: INFO: rc: 1
Oct 24 10:43:15.732: INFO: Service reachability failing with error: error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=services-8096 exec execpod-affinity79z6n -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip-timeout 80
nc: connect to affinity-clusterip-timeout port 80 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 24 10:43:16.733: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=services-8096 exec execpod-affinity79z6n -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80'
Oct 24 10:43:18.358: INFO: stderr: "+ nc -zv -t -w 2 affinity-clusterip-timeout 80\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\n"
Oct 24 10:43:18.359: INFO: stdout: ""
Oct 24 10:43:18.359: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=services-8096 exec execpod-affinity79z6n -- /bin/sh -x -c nc -zv -t -w 2 10.0.19.87 80'
... skipping 31 lines ...
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:43:58.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8096" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:781
•{"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":306,"completed":176,"skipped":2768,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating secret with name projected-secret-test-a18ac0f8-d7aa-4ee7-8740-c9812c91492b
STEP: Creating a pod to test consume secrets
Oct 24 10:43:59.047: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5ceba471-9238-45f2-8f6c-3a6d3ce9dfc2" in namespace "projected-2681" to be "Succeeded or Failed"
Oct 24 10:43:59.093: INFO: Pod "pod-projected-secrets-5ceba471-9238-45f2-8f6c-3a6d3ce9dfc2": Phase="Pending", Reason="", readiness=false. Elapsed: 46.128168ms
Oct 24 10:44:01.133: INFO: Pod "pod-projected-secrets-5ceba471-9238-45f2-8f6c-3a6d3ce9dfc2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.085928099s
STEP: Saw pod success
Oct 24 10:44:01.133: INFO: Pod "pod-projected-secrets-5ceba471-9238-45f2-8f6c-3a6d3ce9dfc2" satisfied condition "Succeeded or Failed"
Oct 24 10:44:01.172: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod pod-projected-secrets-5ceba471-9238-45f2-8f6c-3a6d3ce9dfc2 container secret-volume-test: <nil>
STEP: delete the pod
Oct 24 10:44:01.259: INFO: Waiting for pod pod-projected-secrets-5ceba471-9238-45f2-8f6c-3a6d3ce9dfc2 to disappear
Oct 24 10:44:01.297: INFO: Pod pod-projected-secrets-5ceba471-9238-45f2-8f6c-3a6d3ce9dfc2 no longer exists
[AfterEach] [sig-storage] Projected secret
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:44:01.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2681" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":306,"completed":177,"skipped":2813,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Oct 24 10:44:01.618: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7d66acc0-bace-4281-b342-5a696e976d75" in namespace "downward-api-9293" to be "Succeeded or Failed"
Oct 24 10:44:01.654: INFO: Pod "downwardapi-volume-7d66acc0-bace-4281-b342-5a696e976d75": Phase="Pending", Reason="", readiness=false. Elapsed: 36.415091ms
Oct 24 10:44:03.712: INFO: Pod "downwardapi-volume-7d66acc0-bace-4281-b342-5a696e976d75": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.093820106s
STEP: Saw pod success
Oct 24 10:44:03.712: INFO: Pod "downwardapi-volume-7d66acc0-bace-4281-b342-5a696e976d75" satisfied condition "Succeeded or Failed"
Oct 24 10:44:03.781: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod downwardapi-volume-7d66acc0-bace-4281-b342-5a696e976d75 container client-container: <nil>
STEP: delete the pod
Oct 24 10:44:03.982: INFO: Waiting for pod downwardapi-volume-7d66acc0-bace-4281-b342-5a696e976d75 to disappear
Oct 24 10:44:04.089: INFO: Pod downwardapi-volume-7d66acc0-bace-4281-b342-5a696e976d75 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:44:04.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9293" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":306,"completed":178,"skipped":2825,"failed":0}
SSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] ReplicaSet
... skipping 12 lines ...
Oct 24 10:44:09.945: INFO: Trying to dial the pod
Oct 24 10:44:15.067: INFO: Controller my-hostname-basic-3ec3bfee-ee6b-4ef9-9b8f-5bd8e882a121: Got expected result from replica 1 [my-hostname-basic-3ec3bfee-ee6b-4ef9-9b8f-5bd8e882a121-wzl8m]: "my-hostname-basic-3ec3bfee-ee6b-4ef9-9b8f-5bd8e882a121-wzl8m", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:44:15.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-6010" for this suite.
•{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":306,"completed":179,"skipped":2830,"failed":0}
SSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] DNS
... skipping 16 lines ...

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:44:17.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9634" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":306,"completed":180,"skipped":2839,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Oct 24 10:44:18.275: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fcc79c49-9441-4a6d-87f2-e934fe45aff3" in namespace "downward-api-4653" to be "Succeeded or Failed"
Oct 24 10:44:18.316: INFO: Pod "downwardapi-volume-fcc79c49-9441-4a6d-87f2-e934fe45aff3": Phase="Pending", Reason="", readiness=false. Elapsed: 41.007642ms
Oct 24 10:44:20.379: INFO: Pod "downwardapi-volume-fcc79c49-9441-4a6d-87f2-e934fe45aff3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.103771069s
STEP: Saw pod success
Oct 24 10:44:20.379: INFO: Pod "downwardapi-volume-fcc79c49-9441-4a6d-87f2-e934fe45aff3" satisfied condition "Succeeded or Failed"
Oct 24 10:44:20.419: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod downwardapi-volume-fcc79c49-9441-4a6d-87f2-e934fe45aff3 container client-container: <nil>
STEP: delete the pod
Oct 24 10:44:20.670: INFO: Waiting for pod downwardapi-volume-fcc79c49-9441-4a6d-87f2-e934fe45aff3 to disappear
Oct 24 10:44:20.709: INFO: Pod downwardapi-volume-fcc79c49-9441-4a6d-87f2-e934fe45aff3 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:44:20.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4653" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":306,"completed":181,"skipped":2862,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should patch a secret [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Secrets
... skipping 10 lines ...
STEP: deleting the secret using a LabelSelector
STEP: listing secrets in all namespaces, searching for label name and value in patch
[AfterEach] [sig-api-machinery] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:44:21.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7944" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":306,"completed":182,"skipped":2886,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-cli] Kubectl client
... skipping 28 lines ...
Oct 24 10:44:37.871: INFO: stderr: ""
Oct 24 10:44:37.871: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:44:37.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4323" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":306,"completed":183,"skipped":2908,"failed":0}
S
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Docker Containers
... skipping 5 lines ...
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[AfterEach] [k8s.io] Docker Containers
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:44:40.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9896" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":306,"completed":184,"skipped":2909,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 12 lines ...
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:44:52.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3562" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":306,"completed":185,"skipped":2923,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
... skipping 9 lines ...
STEP: creating the pod
Oct 24 10:44:52.356: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:44:56.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8747" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":306,"completed":186,"skipped":2965,"failed":0}
SS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 101 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:45:04.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3657" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":306,"completed":187,"skipped":2967,"failed":0}
SSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] version v1
... skipping 342 lines ...
Oct 24 10:45:16.137: INFO: Deleting ReplicationController proxy-service-pspcv took: 144.97768ms
Oct 24 10:45:16.937: INFO: Terminating ReplicationController proxy-service-pspcv pods took: 800.321588ms
[AfterEach] version v1
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:45:28.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-9534" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":306,"completed":188,"skipped":2970,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 24 lines ...
Oct 24 10:45:30.469: INFO: created pod pod-service-account-nomountsa-nomountspec
Oct 24 10:45:30.469: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:45:30.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-1738" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":306,"completed":189,"skipped":2999,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 44 lines ...
Oct 24 10:46:14.149: INFO: Deleting pod "simpletest.rc-qb744" in namespace "gc-1027"
Oct 24 10:46:14.240: INFO: Deleting pod "simpletest.rc-rvf2j" in namespace "gc-1027"
[AfterEach] [sig-api-machinery] Garbage collector
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:46:14.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1027" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":306,"completed":190,"skipped":3021,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
... skipping 28 lines ...
Oct 24 10:47:26.592: INFO: Terminating ReplicationController wrapped-volume-race-0386957c-3678-48ea-b39d-0d254ec9d19b pods took: 700.347652ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:47:43.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-9362" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":306,"completed":191,"skipped":3071,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 25 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:47:51.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8869" for this suite.
STEP: Destroying namespace "webhook-8869-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":306,"completed":192,"skipped":3083,"failed":0}
SSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Networking
... skipping 40 lines ...
Oct 24 10:48:15.754: INFO: reached 10.64.1.186 after 0/1 tries
Oct 24 10:48:15.754: INFO: Going to retry 0 out of 3 pods....
[AfterEach] [sig-network] Networking
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:48:15.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-860" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":306,"completed":193,"skipped":3087,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 58 lines ...
Oct 24 10:48:38.486: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"16448"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:48:38.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1410" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":306,"completed":194,"skipped":3118,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating pod pod-subpath-test-secret-l98d
STEP: Creating a pod to test atomic-volume-subpath
Oct 24 10:48:39.242: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-l98d" in namespace "subpath-9731" to be "Succeeded or Failed"
Oct 24 10:48:39.298: INFO: Pod "pod-subpath-test-secret-l98d": Phase="Pending", Reason="", readiness=false. Elapsed: 56.068703ms
Oct 24 10:48:41.338: INFO: Pod "pod-subpath-test-secret-l98d": Phase="Running", Reason="", readiness=true. Elapsed: 2.095870635s
Oct 24 10:48:43.378: INFO: Pod "pod-subpath-test-secret-l98d": Phase="Running", Reason="", readiness=true. Elapsed: 4.13593873s
Oct 24 10:48:45.455: INFO: Pod "pod-subpath-test-secret-l98d": Phase="Running", Reason="", readiness=true. Elapsed: 6.212656753s
Oct 24 10:48:47.495: INFO: Pod "pod-subpath-test-secret-l98d": Phase="Running", Reason="", readiness=true. Elapsed: 8.252656336s
Oct 24 10:48:49.540: INFO: Pod "pod-subpath-test-secret-l98d": Phase="Running", Reason="", readiness=true. Elapsed: 10.297361188s
Oct 24 10:48:51.608: INFO: Pod "pod-subpath-test-secret-l98d": Phase="Running", Reason="", readiness=true. Elapsed: 12.365356112s
Oct 24 10:48:53.648: INFO: Pod "pod-subpath-test-secret-l98d": Phase="Running", Reason="", readiness=true. Elapsed: 14.405388709s
Oct 24 10:48:55.689: INFO: Pod "pod-subpath-test-secret-l98d": Phase="Running", Reason="", readiness=true. Elapsed: 16.447004176s
Oct 24 10:48:57.739: INFO: Pod "pod-subpath-test-secret-l98d": Phase="Running", Reason="", readiness=true. Elapsed: 18.496819407s
Oct 24 10:48:59.780: INFO: Pod "pod-subpath-test-secret-l98d": Phase="Running", Reason="", readiness=true. Elapsed: 20.537500097s
Oct 24 10:49:01.823: INFO: Pod "pod-subpath-test-secret-l98d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.580699561s
STEP: Saw pod success
Oct 24 10:49:01.823: INFO: Pod "pod-subpath-test-secret-l98d" satisfied condition "Succeeded or Failed"
Oct 24 10:49:01.863: INFO: Trying to get logs from node bootstrap-e2e-minion-group-g27b pod pod-subpath-test-secret-l98d container test-container-subpath-secret-l98d: <nil>
STEP: delete the pod
Oct 24 10:49:01.987: INFO: Waiting for pod pod-subpath-test-secret-l98d to disappear
Oct 24 10:49:02.027: INFO: Pod pod-subpath-test-secret-l98d no longer exists
STEP: Deleting pod pod-subpath-test-secret-l98d
Oct 24 10:49:02.027: INFO: Deleting pod "pod-subpath-test-secret-l98d" in namespace "subpath-9731"
[AfterEach] [sig-storage] Subpath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:49:02.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9731" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":306,"completed":195,"skipped":3156,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] ReplicaSet
... skipping 11 lines ...
Oct 24 10:49:04.810: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:49:04.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-2579" for this suite.
•{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":306,"completed":196,"skipped":3182,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 10 lines ...
Oct 24 10:49:23.192: INFO: >>> kubeConfig: /workspace/.kube/config
Oct 24 10:49:27.220: INFO: >>> kubeConfig: /workspace/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:49:43.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5148" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":306,"completed":197,"skipped":3200,"failed":0}
S
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Oct 24 10:49:43.991: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a4a17b9a-d06c-4aa9-b55e-27cfc853307b" in namespace "downward-api-7444" to be "Succeeded or Failed"
Oct 24 10:49:44.031: INFO: Pod "downwardapi-volume-a4a17b9a-d06c-4aa9-b55e-27cfc853307b": Phase="Pending", Reason="", readiness=false. Elapsed: 39.864062ms
Oct 24 10:49:46.100: INFO: Pod "downwardapi-volume-a4a17b9a-d06c-4aa9-b55e-27cfc853307b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.109125278s
STEP: Saw pod success
Oct 24 10:49:46.100: INFO: Pod "downwardapi-volume-a4a17b9a-d06c-4aa9-b55e-27cfc853307b" satisfied condition "Succeeded or Failed"
Oct 24 10:49:46.171: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod downwardapi-volume-a4a17b9a-d06c-4aa9-b55e-27cfc853307b container client-container: <nil>
STEP: delete the pod
Oct 24 10:49:46.531: INFO: Waiting for pod downwardapi-volume-a4a17b9a-d06c-4aa9-b55e-27cfc853307b to disappear
Oct 24 10:49:46.570: INFO: Pod downwardapi-volume-a4a17b9a-d06c-4aa9-b55e-27cfc853307b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:49:46.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7444" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":306,"completed":198,"skipped":3201,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Oct 24 10:49:46.953: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e4fc1e9c-655e-4bc8-8b2e-8c090c40ac4f" in namespace "downward-api-2090" to be "Succeeded or Failed"
Oct 24 10:49:46.993: INFO: Pod "downwardapi-volume-e4fc1e9c-655e-4bc8-8b2e-8c090c40ac4f": Phase="Pending", Reason="", readiness=false. Elapsed: 39.252801ms
Oct 24 10:49:49.040: INFO: Pod "downwardapi-volume-e4fc1e9c-655e-4bc8-8b2e-8c090c40ac4f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.086962772s
STEP: Saw pod success
Oct 24 10:49:49.041: INFO: Pod "downwardapi-volume-e4fc1e9c-655e-4bc8-8b2e-8c090c40ac4f" satisfied condition "Succeeded or Failed"
Oct 24 10:49:49.080: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod downwardapi-volume-e4fc1e9c-655e-4bc8-8b2e-8c090c40ac4f container client-container: <nil>
STEP: delete the pod
Oct 24 10:49:49.178: INFO: Waiting for pod downwardapi-volume-e4fc1e9c-655e-4bc8-8b2e-8c090c40ac4f to disappear
Oct 24 10:49:49.218: INFO: Pod downwardapi-volume-e4fc1e9c-655e-4bc8-8b2e-8c090c40ac4f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:49:49.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2090" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":306,"completed":199,"skipped":3210,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Downward API volume
... skipping 9 lines ...
STEP: Creating the pod
Oct 24 10:49:52.871: INFO: Successfully updated pod "labelsupdate8b74c61b-5fe8-4284-b2bb-d023401fdf0b"
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:49:55.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2979" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":306,"completed":200,"skipped":3230,"failed":0}
S
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Oct 24 10:49:55.253: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir volume type on tmpfs
Oct 24 10:49:55.498: INFO: Waiting up to 5m0s for pod "pod-cc5b6922-0fed-4dbf-8079-54a63018a143" in namespace "emptydir-5554" to be "Succeeded or Failed"
Oct 24 10:49:55.537: INFO: Pod "pod-cc5b6922-0fed-4dbf-8079-54a63018a143": Phase="Pending", Reason="", readiness=false. Elapsed: 39.381741ms
Oct 24 10:49:57.577: INFO: Pod "pod-cc5b6922-0fed-4dbf-8079-54a63018a143": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.078988137s
STEP: Saw pod success
Oct 24 10:49:57.577: INFO: Pod "pod-cc5b6922-0fed-4dbf-8079-54a63018a143" satisfied condition "Succeeded or Failed"
Oct 24 10:49:57.616: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod pod-cc5b6922-0fed-4dbf-8079-54a63018a143 container test-container: <nil>
STEP: delete the pod
Oct 24 10:49:57.846: INFO: Waiting for pod pod-cc5b6922-0fed-4dbf-8079-54a63018a143 to disappear
Oct 24 10:49:57.886: INFO: Pod pod-cc5b6922-0fed-4dbf-8079-54a63018a143 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:49:57.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5554" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":201,"skipped":3231,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating pod pod-subpath-test-projected-stt9
STEP: Creating a pod to test atomic-volume-subpath
Oct 24 10:49:58.537: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-stt9" in namespace "subpath-4369" to be "Succeeded or Failed"
Oct 24 10:49:58.603: INFO: Pod "pod-subpath-test-projected-stt9": Phase="Pending", Reason="", readiness=false. Elapsed: 66.46331ms
Oct 24 10:50:00.643: INFO: Pod "pod-subpath-test-projected-stt9": Phase="Running", Reason="", readiness=true. Elapsed: 2.106204879s
Oct 24 10:50:02.688: INFO: Pod "pod-subpath-test-projected-stt9": Phase="Running", Reason="", readiness=true. Elapsed: 4.151169593s
Oct 24 10:50:04.770: INFO: Pod "pod-subpath-test-projected-stt9": Phase="Running", Reason="", readiness=true. Elapsed: 6.233654631s
Oct 24 10:50:06.812: INFO: Pod "pod-subpath-test-projected-stt9": Phase="Running", Reason="", readiness=true. Elapsed: 8.274914446s
Oct 24 10:50:08.858: INFO: Pod "pod-subpath-test-projected-stt9": Phase="Running", Reason="", readiness=true. Elapsed: 10.321609264s
Oct 24 10:50:10.956: INFO: Pod "pod-subpath-test-projected-stt9": Phase="Running", Reason="", readiness=true. Elapsed: 12.41900884s
Oct 24 10:50:12.996: INFO: Pod "pod-subpath-test-projected-stt9": Phase="Running", Reason="", readiness=true. Elapsed: 14.459158896s
Oct 24 10:50:15.041: INFO: Pod "pod-subpath-test-projected-stt9": Phase="Running", Reason="", readiness=true. Elapsed: 16.504271282s
Oct 24 10:50:17.081: INFO: Pod "pod-subpath-test-projected-stt9": Phase="Running", Reason="", readiness=true. Elapsed: 18.544320145s
Oct 24 10:50:19.121: INFO: Pod "pod-subpath-test-projected-stt9": Phase="Running", Reason="", readiness=true. Elapsed: 20.583824509s
Oct 24 10:50:21.160: INFO: Pod "pod-subpath-test-projected-stt9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.623587638s
STEP: Saw pod success
Oct 24 10:50:21.160: INFO: Pod "pod-subpath-test-projected-stt9" satisfied condition "Succeeded or Failed"
Oct 24 10:50:21.200: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod pod-subpath-test-projected-stt9 container test-container-subpath-projected-stt9: <nil>
STEP: delete the pod
Oct 24 10:50:21.292: INFO: Waiting for pod pod-subpath-test-projected-stt9 to disappear
Oct 24 10:50:21.330: INFO: Pod pod-subpath-test-projected-stt9 no longer exists
STEP: Deleting pod pod-subpath-test-projected-stt9
Oct 24 10:50:21.331: INFO: Deleting pod "pod-subpath-test-projected-stt9" in namespace "subpath-4369"
[AfterEach] [sig-storage] Subpath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:50:21.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4369" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":306,"completed":202,"skipped":3243,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating secret with name secret-test-300cd7fb-5251-49e5-9fe4-c244c6dddecd
STEP: Creating a pod to test consume secrets
Oct 24 10:50:21.917: INFO: Waiting up to 5m0s for pod "pod-secrets-02dfb056-b041-4b39-8d2f-34d6aab96251" in namespace "secrets-4946" to be "Succeeded or Failed"
Oct 24 10:50:21.962: INFO: Pod "pod-secrets-02dfb056-b041-4b39-8d2f-34d6aab96251": Phase="Pending", Reason="", readiness=false. Elapsed: 45.518877ms
Oct 24 10:50:24.002: INFO: Pod "pod-secrets-02dfb056-b041-4b39-8d2f-34d6aab96251": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.085279742s
STEP: Saw pod success
Oct 24 10:50:24.002: INFO: Pod "pod-secrets-02dfb056-b041-4b39-8d2f-34d6aab96251" satisfied condition "Succeeded or Failed"
Oct 24 10:50:24.041: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod pod-secrets-02dfb056-b041-4b39-8d2f-34d6aab96251 container secret-volume-test: <nil>
STEP: delete the pod
Oct 24 10:50:24.133: INFO: Waiting for pod pod-secrets-02dfb056-b041-4b39-8d2f-34d6aab96251 to disappear
Oct 24 10:50:24.172: INFO: Pod pod-secrets-02dfb056-b041-4b39-8d2f-34d6aab96251 no longer exists
[AfterEach] [sig-storage] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:50:24.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4946" for this suite.
STEP: Destroying namespace "secret-namespace-1538" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":306,"completed":203,"skipped":3258,"failed":0}
SSSSSSSSS
------------------------------
[sig-network] Ingress API 
  should support creating Ingress API operations [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Ingress API
... skipping 25 lines ...
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] Ingress API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:50:25.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingress-8203" for this suite.
•{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":306,"completed":204,"skipped":3267,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 12 lines ...
Oct 24 10:50:28.358: INFO: >>> kubeConfig: /workspace/.kube/config
Oct 24 10:50:28.752: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:50:28.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7128" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":306,"completed":205,"skipped":3281,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Oct 24 10:50:29.225: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5c74146c-5bfd-4b19-a6e0-3994e2cb070a" in namespace "projected-5378" to be "Succeeded or Failed"
Oct 24 10:50:29.332: INFO: Pod "downwardapi-volume-5c74146c-5bfd-4b19-a6e0-3994e2cb070a": Phase="Pending", Reason="", readiness=false. Elapsed: 106.756099ms
Oct 24 10:50:31.372: INFO: Pod "downwardapi-volume-5c74146c-5bfd-4b19-a6e0-3994e2cb070a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.14695834s
STEP: Saw pod success
Oct 24 10:50:31.372: INFO: Pod "downwardapi-volume-5c74146c-5bfd-4b19-a6e0-3994e2cb070a" satisfied condition "Succeeded or Failed"
Oct 24 10:50:31.413: INFO: Trying to get logs from node bootstrap-e2e-minion-group-g27b pod downwardapi-volume-5c74146c-5bfd-4b19-a6e0-3994e2cb070a container client-container: <nil>
STEP: delete the pod
Oct 24 10:50:31.504: INFO: Waiting for pod downwardapi-volume-5c74146c-5bfd-4b19-a6e0-3994e2cb070a to disappear
Oct 24 10:50:31.543: INFO: Pod downwardapi-volume-5c74146c-5bfd-4b19-a6e0-3994e2cb070a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:50:31.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5378" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":306,"completed":206,"skipped":3309,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name projected-configmap-test-volume-9562b4ce-601d-40a1-93db-e554b398a5e8
STEP: Creating a pod to test consume configMaps
Oct 24 10:50:31.907: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-04e7ea54-90ef-484b-b7f6-116f0296588b" in namespace "projected-128" to be "Succeeded or Failed"
Oct 24 10:50:31.955: INFO: Pod "pod-projected-configmaps-04e7ea54-90ef-484b-b7f6-116f0296588b": Phase="Pending", Reason="", readiness=false. Elapsed: 47.555134ms
Oct 24 10:50:34.000: INFO: Pod "pod-projected-configmaps-04e7ea54-90ef-484b-b7f6-116f0296588b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.093168543s
STEP: Saw pod success
Oct 24 10:50:34.001: INFO: Pod "pod-projected-configmaps-04e7ea54-90ef-484b-b7f6-116f0296588b" satisfied condition "Succeeded or Failed"
Oct 24 10:50:34.053: INFO: Trying to get logs from node bootstrap-e2e-minion-group-g27b pod pod-projected-configmaps-04e7ea54-90ef-484b-b7f6-116f0296588b container agnhost-container: <nil>
STEP: delete the pod
Oct 24 10:50:34.176: INFO: Waiting for pod pod-projected-configmaps-04e7ea54-90ef-484b-b7f6-116f0296588b to disappear
Oct 24 10:50:34.247: INFO: Pod pod-projected-configmaps-04e7ea54-90ef-484b-b7f6-116f0296588b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:50:34.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-128" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":207,"skipped":3325,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] 
  evicts pods with minTolerationSeconds [Disruptive] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial]
... skipping 19 lines ...
Oct 24 10:52:07.865: INFO: Noticed Pod "taint-eviction-b2" gets evicted.
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute
[AfterEach] [k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:52:07.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "taint-multiple-pods-3392" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]","total":306,"completed":208,"skipped":3352,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 15 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:52:14.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-3482" for this suite.
STEP: Destroying namespace "nsdeletetest-5371" for this suite.
Oct 24 10:52:14.893: INFO: Namespace nsdeletetest-5371 was already deleted
STEP: Destroying namespace "nsdeletetest-4552" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":306,"completed":209,"skipped":3369,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Watchers
... skipping 26 lines ...
Oct 24 10:53:05.632: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-4970  3dcd44a4-b7a4-438e-9552-711bb61a7d9d 17340 0 2020-10-24 10:52:55 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-10-24 10:52:55 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Oct 24 10:53:05.632: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-4970  3dcd44a4-b7a4-438e-9552-711bb61a7d9d 17340 0 2020-10-24 10:52:55 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-10-24 10:52:55 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:53:15.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4970" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":306,"completed":210,"skipped":3375,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Oct 24 10:53:15.919: INFO: >>> kubeConfig: /workspace/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:53:16.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-397" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":306,"completed":211,"skipped":3380,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating pod pod-subpath-test-configmap-mf68
STEP: Creating a pod to test atomic-volume-subpath
Oct 24 10:53:16.651: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-mf68" in namespace "subpath-5918" to be "Succeeded or Failed"
Oct 24 10:53:16.690: INFO: Pod "pod-subpath-test-configmap-mf68": Phase="Pending", Reason="", readiness=false. Elapsed: 39.031091ms
Oct 24 10:53:18.731: INFO: Pod "pod-subpath-test-configmap-mf68": Phase="Running", Reason="", readiness=true. Elapsed: 2.080047178s
Oct 24 10:53:20.806: INFO: Pod "pod-subpath-test-configmap-mf68": Phase="Running", Reason="", readiness=true. Elapsed: 4.155158794s
Oct 24 10:53:22.846: INFO: Pod "pod-subpath-test-configmap-mf68": Phase="Running", Reason="", readiness=true. Elapsed: 6.195054551s
Oct 24 10:53:24.888: INFO: Pod "pod-subpath-test-configmap-mf68": Phase="Running", Reason="", readiness=true. Elapsed: 8.237160124s
Oct 24 10:53:26.928: INFO: Pod "pod-subpath-test-configmap-mf68": Phase="Running", Reason="", readiness=true. Elapsed: 10.277163415s
Oct 24 10:53:28.968: INFO: Pod "pod-subpath-test-configmap-mf68": Phase="Running", Reason="", readiness=true. Elapsed: 12.316626453s
Oct 24 10:53:31.008: INFO: Pod "pod-subpath-test-configmap-mf68": Phase="Running", Reason="", readiness=true. Elapsed: 14.3573733s
Oct 24 10:53:33.048: INFO: Pod "pod-subpath-test-configmap-mf68": Phase="Running", Reason="", readiness=true. Elapsed: 16.397289232s
Oct 24 10:53:35.088: INFO: Pod "pod-subpath-test-configmap-mf68": Phase="Running", Reason="", readiness=true. Elapsed: 18.436902825s
Oct 24 10:53:37.128: INFO: Pod "pod-subpath-test-configmap-mf68": Phase="Running", Reason="", readiness=true. Elapsed: 20.476584991s
Oct 24 10:53:39.167: INFO: Pod "pod-subpath-test-configmap-mf68": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.515983341s
STEP: Saw pod success
Oct 24 10:53:39.167: INFO: Pod "pod-subpath-test-configmap-mf68" satisfied condition "Succeeded or Failed"
Oct 24 10:53:39.207: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod pod-subpath-test-configmap-mf68 container test-container-subpath-configmap-mf68: <nil>
STEP: delete the pod
Oct 24 10:53:39.306: INFO: Waiting for pod pod-subpath-test-configmap-mf68 to disappear
Oct 24 10:53:39.345: INFO: Pod pod-subpath-test-configmap-mf68 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-mf68
Oct 24 10:53:39.345: INFO: Deleting pod "pod-subpath-test-configmap-mf68" in namespace "subpath-5918"
[AfterEach] [sig-storage] Subpath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:53:39.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5918" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":306,"completed":212,"skipped":3410,"failed":0}
SSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Probing container
... skipping 12 lines ...
Oct 24 10:53:41.827: INFO: Initial restart count of pod liveness-ce3604f6-d1e7-4cce-8773-519222968ba8 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:57:43.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2018" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":306,"completed":213,"skipped":3416,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Oct 24 10:57:43.967: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0777 on tmpfs
Oct 24 10:57:44.209: INFO: Waiting up to 5m0s for pod "pod-73a59ab1-0dcc-4818-b514-96b104515be5" in namespace "emptydir-7800" to be "Succeeded or Failed"
Oct 24 10:57:44.248: INFO: Pod "pod-73a59ab1-0dcc-4818-b514-96b104515be5": Phase="Pending", Reason="", readiness=false. Elapsed: 39.427808ms
Oct 24 10:57:46.294: INFO: Pod "pod-73a59ab1-0dcc-4818-b514-96b104515be5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.085462248s
STEP: Saw pod success
Oct 24 10:57:46.294: INFO: Pod "pod-73a59ab1-0dcc-4818-b514-96b104515be5" satisfied condition "Succeeded or Failed"
Oct 24 10:57:46.353: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod pod-73a59ab1-0dcc-4818-b514-96b104515be5 container test-container: <nil>
STEP: delete the pod
Oct 24 10:57:46.654: INFO: Waiting for pod pod-73a59ab1-0dcc-4818-b514-96b104515be5 to disappear
Oct 24 10:57:46.805: INFO: Pod pod-73a59ab1-0dcc-4818-b514-96b104515be5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:57:46.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7800" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":214,"skipped":3454,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name configmap-test-volume-63253325-b27a-46bf-90d0-0ed00a5b660c
STEP: Creating a pod to test consume configMaps
Oct 24 10:57:47.535: INFO: Waiting up to 5m0s for pod "pod-configmaps-d8c2dda2-f0cb-4445-a92c-4732ed427c1f" in namespace "configmap-1437" to be "Succeeded or Failed"
Oct 24 10:57:47.579: INFO: Pod "pod-configmaps-d8c2dda2-f0cb-4445-a92c-4732ed427c1f": Phase="Pending", Reason="", readiness=false. Elapsed: 43.691695ms
Oct 24 10:57:49.745: INFO: Pod "pod-configmaps-d8c2dda2-f0cb-4445-a92c-4732ed427c1f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.209781838s
STEP: Saw pod success
Oct 24 10:57:49.745: INFO: Pod "pod-configmaps-d8c2dda2-f0cb-4445-a92c-4732ed427c1f" satisfied condition "Succeeded or Failed"
Oct 24 10:57:49.785: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod pod-configmaps-d8c2dda2-f0cb-4445-a92c-4732ed427c1f container configmap-volume-test: <nil>
STEP: delete the pod
Oct 24 10:57:49.880: INFO: Waiting for pod pod-configmaps-d8c2dda2-f0cb-4445-a92c-4732ed427c1f to disappear
Oct 24 10:57:49.923: INFO: Pod pod-configmaps-d8c2dda2-f0cb-4445-a92c-4732ed427c1f no longer exists
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:57:49.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1437" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":306,"completed":215,"skipped":3474,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-node] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap configmap-2574/configmap-test-9257b767-065d-411f-a3c3-706d0bcb3b5c
STEP: Creating a pod to test consume configMaps
Oct 24 10:57:50.299: INFO: Waiting up to 5m0s for pod "pod-configmaps-894bc22f-c98b-4483-86ed-a1c675f0c796" in namespace "configmap-2574" to be "Succeeded or Failed"
Oct 24 10:57:50.339: INFO: Pod "pod-configmaps-894bc22f-c98b-4483-86ed-a1c675f0c796": Phase="Pending", Reason="", readiness=false. Elapsed: 39.496529ms
Oct 24 10:57:52.388: INFO: Pod "pod-configmaps-894bc22f-c98b-4483-86ed-a1c675f0c796": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.08934171s
STEP: Saw pod success
Oct 24 10:57:52.389: INFO: Pod "pod-configmaps-894bc22f-c98b-4483-86ed-a1c675f0c796" satisfied condition "Succeeded or Failed"
Oct 24 10:57:52.470: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod pod-configmaps-894bc22f-c98b-4483-86ed-a1c675f0c796 container env-test: <nil>
STEP: delete the pod
Oct 24 10:57:52.722: INFO: Waiting for pod pod-configmaps-894bc22f-c98b-4483-86ed-a1c675f0c796 to disappear
Oct 24 10:57:52.791: INFO: Pod pod-configmaps-894bc22f-c98b-4483-86ed-a1c675f0c796 no longer exists
[AfterEach] [sig-node] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:57:52.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2574" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":306,"completed":216,"skipped":3497,"failed":0}
SSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 25 lines ...
Oct 24 10:58:09.834: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:58:09.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2049" for this suite.
•{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":306,"completed":217,"skipped":3501,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Secrets
... skipping 12 lines ...
STEP: Creating secret with name s-test-opt-create-a9ae3b13-344a-4bd0-8501-00d28cf0e360
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:59:40.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2017" for this suite.
•{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":306,"completed":218,"skipped":3522,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 16 lines ...
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 10:59:54.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1398" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":306,"completed":219,"skipped":3528,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating pod pod-subpath-test-configmap-dxlh
STEP: Creating a pod to test atomic-volume-subpath
Oct 24 10:59:54.631: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-dxlh" in namespace "subpath-3341" to be "Succeeded or Failed"
Oct 24 10:59:54.680: INFO: Pod "pod-subpath-test-configmap-dxlh": Phase="Pending", Reason="", readiness=false. Elapsed: 48.515888ms
Oct 24 10:59:56.719: INFO: Pod "pod-subpath-test-configmap-dxlh": Phase="Running", Reason="", readiness=true. Elapsed: 2.088231658s
Oct 24 10:59:58.759: INFO: Pod "pod-subpath-test-configmap-dxlh": Phase="Running", Reason="", readiness=true. Elapsed: 4.128199055s
Oct 24 11:00:00.801: INFO: Pod "pod-subpath-test-configmap-dxlh": Phase="Running", Reason="", readiness=true. Elapsed: 6.169888967s
Oct 24 11:00:02.841: INFO: Pod "pod-subpath-test-configmap-dxlh": Phase="Running", Reason="", readiness=true. Elapsed: 8.209620714s
Oct 24 11:00:04.881: INFO: Pod "pod-subpath-test-configmap-dxlh": Phase="Running", Reason="", readiness=true. Elapsed: 10.24931924s
Oct 24 11:00:06.967: INFO: Pod "pod-subpath-test-configmap-dxlh": Phase="Running", Reason="", readiness=true. Elapsed: 12.336125976s
Oct 24 11:00:09.007: INFO: Pod "pod-subpath-test-configmap-dxlh": Phase="Running", Reason="", readiness=true. Elapsed: 14.375828572s
Oct 24 11:00:11.047: INFO: Pod "pod-subpath-test-configmap-dxlh": Phase="Running", Reason="", readiness=true. Elapsed: 16.415978452s
Oct 24 11:00:13.160: INFO: Pod "pod-subpath-test-configmap-dxlh": Phase="Running", Reason="", readiness=true. Elapsed: 18.528698384s
Oct 24 11:00:15.200: INFO: Pod "pod-subpath-test-configmap-dxlh": Phase="Running", Reason="", readiness=true. Elapsed: 20.5684961s
Oct 24 11:00:17.240: INFO: Pod "pod-subpath-test-configmap-dxlh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.608931891s
STEP: Saw pod success
Oct 24 11:00:17.240: INFO: Pod "pod-subpath-test-configmap-dxlh" satisfied condition "Succeeded or Failed"
Oct 24 11:00:17.279: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod pod-subpath-test-configmap-dxlh container test-container-subpath-configmap-dxlh: <nil>
STEP: delete the pod
Oct 24 11:00:17.377: INFO: Waiting for pod pod-subpath-test-configmap-dxlh to disappear
Oct 24 11:00:17.416: INFO: Pod pod-subpath-test-configmap-dxlh no longer exists
STEP: Deleting pod pod-subpath-test-configmap-dxlh
Oct 24 11:00:17.416: INFO: Deleting pod "pod-subpath-test-configmap-dxlh" in namespace "subpath-3341"
[AfterEach] [sig-storage] Subpath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:00:17.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3341" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":306,"completed":220,"skipped":3536,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a volume subpath [sig-storage] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Variable Expansion
... skipping 2 lines ...
Oct 24 11:00:17.539: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a volume subpath [sig-storage] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test substitution in volume subpath
Oct 24 11:00:17.780: INFO: Waiting up to 5m0s for pod "var-expansion-d359cecc-f314-401e-8290-4f0297f1c7ea" in namespace "var-expansion-5348" to be "Succeeded or Failed"
Oct 24 11:00:17.820: INFO: Pod "var-expansion-d359cecc-f314-401e-8290-4f0297f1c7ea": Phase="Pending", Reason="", readiness=false. Elapsed: 39.188896ms
Oct 24 11:00:19.921: INFO: Pod "var-expansion-d359cecc-f314-401e-8290-4f0297f1c7ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.140242691s
STEP: Saw pod success
Oct 24 11:00:19.921: INFO: Pod "var-expansion-d359cecc-f314-401e-8290-4f0297f1c7ea" satisfied condition "Succeeded or Failed"
Oct 24 11:00:19.960: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod var-expansion-d359cecc-f314-401e-8290-4f0297f1c7ea container dapi-container: <nil>
STEP: delete the pod
Oct 24 11:00:20.057: INFO: Waiting for pod var-expansion-d359cecc-f314-401e-8290-4f0297f1c7ea to disappear
Oct 24 11:00:20.097: INFO: Pod var-expansion-d359cecc-f314-401e-8290-4f0297f1c7ea no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:00:20.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5348" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":306,"completed":221,"skipped":3555,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 22 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:00:26.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1477" for this suite.
STEP: Destroying namespace "webhook-1477-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":306,"completed":222,"skipped":3561,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 11 lines ...
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:00:51.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3511" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":306,"completed":223,"skipped":3572,"failed":0}
SSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath 
  runs ReplicaSets to verify preemption running path [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
... skipping 32 lines ...
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:02:31.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-8120" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77
•{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":306,"completed":224,"skipped":3582,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Pods
... skipping 14 lines ...
STEP: verifying the updated pod is in kubernetes
Oct 24 11:02:34.809: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:02:34.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8797" for this suite.
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":306,"completed":225,"skipped":3624,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-scheduling] LimitRange 
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-scheduling] LimitRange
... skipping 31 lines ...
Oct 24 11:02:43.027: INFO: limitRange is already deleted
STEP: Creating a Pod with more than former max resources
[AfterEach] [sig-scheduling] LimitRange
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:02:43.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "limitrange-6645" for this suite.
•{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":306,"completed":226,"skipped":3638,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] 
  validates basic preemption works [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
... skipping 17 lines ...
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:04:01.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-5008" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77
•{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":306,"completed":227,"skipped":3654,"failed":0}
S
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Container Runtime
... skipping 12 lines ...
Oct 24 11:04:04.154: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:04:04.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-915" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":306,"completed":228,"skipped":3655,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Services
... skipping 17 lines ...
STEP: creating replication controller affinity-nodeport-timeout in namespace services-1268
I1024 11:04:07.871236  143945 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-1268, replica count: 3
I1024 11:04:10.971865  143945 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Oct 24 11:04:11.179: INFO: Creating new exec pod
Oct 24 11:04:14.420: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=services-1268 exec execpod-affinityztsqt -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80'
Oct 24 11:04:16.032: INFO: rc: 1
Oct 24 11:04:16.032: INFO: Service reachability failing with error: error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=services-1268 exec execpod-affinityztsqt -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-nodeport-timeout 80
nc: connect to affinity-nodeport-timeout port 80 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 24 11:04:17.032: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=services-1268 exec execpod-affinityztsqt -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80'
Oct 24 11:04:18.658: INFO: stderr: "+ nc -zv -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n"
Oct 24 11:04:18.658: INFO: stdout: ""
Oct 24 11:04:18.660: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=services-1268 exec execpod-affinityztsqt -- /bin/sh -x -c nc -zv -t -w 2 10.0.87.54 80'
... skipping 43 lines ...
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:04:59.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1268" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:781
•{"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":306,"completed":229,"skipped":3688,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should test the lifecycle of a ReplicationController [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] ReplicationController
... skipping 26 lines ...
STEP: deleting ReplicationControllers by collection
STEP: waiting for ReplicationController to have a DELETED watchEvent
[AfterEach] [sig-apps] ReplicationController
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:05:15.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2595" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":306,"completed":230,"skipped":3710,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Oct 24 11:05:15.653: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0644 on node default medium
Oct 24 11:05:16.421: INFO: Waiting up to 5m0s for pod "pod-d94fb91b-8eac-49d3-b587-a63ece5ec4b2" in namespace "emptydir-2434" to be "Succeeded or Failed"
Oct 24 11:05:16.475: INFO: Pod "pod-d94fb91b-8eac-49d3-b587-a63ece5ec4b2": Phase="Pending", Reason="", readiness=false. Elapsed: 54.694484ms
Oct 24 11:05:18.515: INFO: Pod "pod-d94fb91b-8eac-49d3-b587-a63ece5ec4b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.094485695s
STEP: Saw pod success
Oct 24 11:05:18.515: INFO: Pod "pod-d94fb91b-8eac-49d3-b587-a63ece5ec4b2" satisfied condition "Succeeded or Failed"
Oct 24 11:05:18.555: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod pod-d94fb91b-8eac-49d3-b587-a63ece5ec4b2 container test-container: <nil>
STEP: delete the pod
Oct 24 11:05:18.657: INFO: Waiting for pod pod-d94fb91b-8eac-49d3-b587-a63ece5ec4b2 to disappear
Oct 24 11:05:18.696: INFO: Pod pod-d94fb91b-8eac-49d3-b587-a63ece5ec4b2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:05:18.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2434" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":231,"skipped":3737,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] Deployment
... skipping 26 lines ...
Oct 24 11:05:23.660: INFO: Pod "test-rolling-update-deployment-6b6bf9df46-m7t9k" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-6b6bf9df46-m7t9k test-rolling-update-deployment-6b6bf9df46- deployment-5656  6d6be76c-247f-4cc1-8356-2b4b09fb068b 19504 0 2020-10-24 11:05:21 +0000 UTC <nil> <nil> map[name:sample-pod pod-template-hash:6b6bf9df46] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-6b6bf9df46 f12597a2-78cd-417f-8d36-bb0abe3f3467 0xc0025051e7 0xc0025051e8}] []  [{kube-controller-manager Update v1 2020-10-24 11:05:21 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f12597a2-78cd-417f-8d36-bb0abe3f3467\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-24 11:05:23 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.64.1.222\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tss25,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tss25,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tss25,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-vkx8,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-24 11:05:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-24 11:05:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-24 11:05:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-24 11:05:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.3,PodIP:10.64.1.222,StartTime:2020-10-24 11:05:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-24 11:05:22 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a,ContainerID:containerd://c51c11fb2478970cb7facb0b6fd5cf4234c442cddad7e7272d979d92624bf228,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.64.1.222,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:05:23.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-5656" for this suite.
•{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":306,"completed":232,"skipped":3749,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Pods
... skipping 15 lines ...
STEP: deleting the pod gracefully
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:05:37.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8803" for this suite.
•{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":306,"completed":233,"skipped":3761,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 24 11:05:37.989: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating the pod with failed condition
STEP: updating the pod
Oct 24 11:07:38.934: INFO: Successfully updated pod "var-expansion-3ddf8c73-0e12-4609-b353-09dbc9b9abf9"
STEP: waiting for pod running
STEP: deleting the pod gracefully
Oct 24 11:07:41.013: INFO: Deleting pod "var-expansion-3ddf8c73-0e12-4609-b353-09dbc9b9abf9" in namespace "var-expansion-2322"
Oct 24 11:07:41.054: INFO: Wait up to 5m0s for pod "var-expansion-3ddf8c73-0e12-4609-b353-09dbc9b9abf9" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:08:29.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2322" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":306,"completed":234,"skipped":3768,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 26 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:08:36.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5400" for this suite.
STEP: Destroying namespace "webhook-5400-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":306,"completed":235,"skipped":3777,"failed":0}
SSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 36 lines ...
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:08:46.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2548" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":306,"completed":236,"skipped":3780,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 24 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:08:50.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7485" for this suite.
STEP: Destroying namespace "webhook-7485-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":306,"completed":237,"skipped":3806,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should have session affinity work for NodePort service [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Services
... skipping 57 lines ...
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:09:28.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4843" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:781
•{"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":306,"completed":238,"skipped":3848,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 12 lines ...
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:09:28.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3943" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":306,"completed":239,"skipped":3870,"failed":0}

------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 10 lines ...
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:09:52.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9986" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":306,"completed":240,"skipped":3870,"failed":0}

------------------------------
[k8s.io] Pods 
  should delete a collection of pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Pods
... skipping 13 lines ...
STEP: waiting for all 3 pods to be located
STEP: waiting for all pods to be deleted
[AfterEach] [k8s.io] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:09:52.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6807" for this suite.
•{"msg":"PASSED [k8s.io] Pods should delete a collection of pods [Conformance]","total":306,"completed":241,"skipped":3870,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 34 lines ...
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:09:57.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6542" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":306,"completed":242,"skipped":3911,"failed":0}
SSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Pods
... skipping 9 lines ...
STEP: creating pod
Oct 24 11:09:59.819: INFO: Pod pod-hostip-ac9df0bf-0a27-4edd-bd61-b5bb3360091d has hostIP: 10.138.0.3
[AfterEach] [k8s.io] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:09:59.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2158" for this suite.
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":306,"completed":243,"skipped":3917,"failed":0}
SSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] Deployment
... skipping 25 lines ...
Oct 24 11:10:03.390: INFO: Pod "test-recreate-deployment-f79dd4667-6zjzh" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-f79dd4667-6zjzh test-recreate-deployment-f79dd4667- deployment-2849  4310549b-d786-4434-b58e-806471ed1c97 20526 0 2020-10-24 11:10:02 +0000 UTC <nil> <nil> map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [{apps/v1 ReplicaSet test-recreate-deployment-f79dd4667 6bcffc90-1e63-49e9-8a07-77274dc5d7ee 0xc005b3e740 0xc005b3e741}] []  [{kube-controller-manager Update v1 2020-10-24 11:10:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6bcffc90-1e63-49e9-8a07-77274dc5d7ee\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-24 11:10:02 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lj8g5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lj8g5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lj8g5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},St
artupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-vkx8,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-24 11:10:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-24 11:10:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-24 11:10:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-24 11:10:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.3,PodIP:,StartTime:2020-10-24 11:10:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:10:03.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2849" for this suite.
•{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":306,"completed":244,"skipped":3924,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] Job
... skipping 12 lines ...
Oct 24 11:10:08.391: INFO: Terminating Job.batch foo pods took: 100.251515ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:10:48.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-515" for this suite.
•{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":306,"completed":245,"skipped":3985,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 22 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:10:54.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4139" for this suite.
STEP: Destroying namespace "webhook-4139-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":306,"completed":246,"skipped":3993,"failed":0}
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 25 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:11:05.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9606" for this suite.
STEP: Destroying namespace "webhook-9606-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":306,"completed":247,"skipped":3996,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] DNS
... skipping 16 lines ...
Oct 24 11:11:09.714: INFO: >>> kubeConfig: /workspace/.kube/config
Oct 24 11:11:10.008: INFO: Deleting pod test-dns-nameservers...
[AfterEach] [sig-network] DNS
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:11:10.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4744" for this suite.
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":306,"completed":248,"skipped":4007,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Pods
... skipping 11 lines ...
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Oct 24 11:11:13.466: INFO: Successfully updated pod "pod-update-activedeadlineseconds-71ac51c3-ef95-4c43-bfa8-ace6c36d4c0c"
Oct 24 11:11:13.466: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-71ac51c3-ef95-4c43-bfa8-ace6c36d4c0c" in namespace "pods-140" to be "terminated due to deadline exceeded"
Oct 24 11:11:13.510: INFO: Pod "pod-update-activedeadlineseconds-71ac51c3-ef95-4c43-bfa8-ace6c36d4c0c": Phase="Running", Reason="", readiness=true. Elapsed: 43.899083ms
Oct 24 11:11:15.556: INFO: Pod "pod-update-activedeadlineseconds-71ac51c3-ef95-4c43-bfa8-ace6c36d4c0c": Phase="Running", Reason="", readiness=true. Elapsed: 2.09032355s
Oct 24 11:11:17.596: INFO: Pod "pod-update-activedeadlineseconds-71ac51c3-ef95-4c43-bfa8-ace6c36d4c0c": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.129865598s
Oct 24 11:11:17.596: INFO: Pod "pod-update-activedeadlineseconds-71ac51c3-ef95-4c43-bfa8-ace6c36d4c0c" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:11:17.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-140" for this suite.
•{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":306,"completed":249,"skipped":4050,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should patch a Namespace [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 9 lines ...
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:11:18.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-1951" for this suite.
STEP: Destroying namespace "nspatchtest-55522baf-cfe0-47cf-840a-ccc2707c313f-2309" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":306,"completed":250,"skipped":4100,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
... skipping 23 lines ...
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:11:27.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-5095" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
•{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":306,"completed":251,"skipped":4129,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Kubelet
... skipping 7 lines ...
[It] should print the output to logs [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[AfterEach] [k8s.io] Kubelet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:11:32.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-32" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":306,"completed":252,"skipped":4155,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] StatefulSet
... skipping 78 lines ...
Oct 24 11:15:57.781: INFO: Waiting for statefulset status.replicas updated to 0
Oct 24 11:15:57.828: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:15:58.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6777" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":306,"completed":253,"skipped":4167,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating secret with name secret-test-94f91ce4-146f-4aea-8487-e808ede9e133
STEP: Creating a pod to test consume secrets
Oct 24 11:15:58.492: INFO: Waiting up to 5m0s for pod "pod-secrets-29019337-f833-404b-9332-6ffb2e03805f" in namespace "secrets-4552" to be "Succeeded or Failed"
Oct 24 11:15:58.531: INFO: Pod "pod-secrets-29019337-f833-404b-9332-6ffb2e03805f": Phase="Pending", Reason="", readiness=false. Elapsed: 39.612673ms
Oct 24 11:16:00.572: INFO: Pod "pod-secrets-29019337-f833-404b-9332-6ffb2e03805f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.080372295s
STEP: Saw pod success
Oct 24 11:16:00.572: INFO: Pod "pod-secrets-29019337-f833-404b-9332-6ffb2e03805f" satisfied condition "Succeeded or Failed"
Oct 24 11:16:00.614: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod pod-secrets-29019337-f833-404b-9332-6ffb2e03805f container secret-volume-test: <nil>
STEP: delete the pod
Oct 24 11:16:00.724: INFO: Waiting for pod pod-secrets-29019337-f833-404b-9332-6ffb2e03805f to disappear
Oct 24 11:16:00.764: INFO: Pod pod-secrets-29019337-f833-404b-9332-6ffb2e03805f no longer exists
[AfterEach] [sig-storage] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:16:00.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4552" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":254,"skipped":4190,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating secret with name secret-test-d3f930fe-8d3a-4ac1-a00e-282527b069b2
STEP: Creating a pod to test consume secrets
Oct 24 11:16:01.335: INFO: Waiting up to 5m0s for pod "pod-secrets-70f2c952-c76f-4a86-98e8-535cd51930ab" in namespace "secrets-7059" to be "Succeeded or Failed"
Oct 24 11:16:01.374: INFO: Pod "pod-secrets-70f2c952-c76f-4a86-98e8-535cd51930ab": Phase="Pending", Reason="", readiness=false. Elapsed: 39.204818ms
Oct 24 11:16:03.415: INFO: Pod "pod-secrets-70f2c952-c76f-4a86-98e8-535cd51930ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.080641972s
STEP: Saw pod success
Oct 24 11:16:03.415: INFO: Pod "pod-secrets-70f2c952-c76f-4a86-98e8-535cd51930ab" satisfied condition "Succeeded or Failed"
Oct 24 11:16:03.477: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod pod-secrets-70f2c952-c76f-4a86-98e8-535cd51930ab container secret-volume-test: <nil>
STEP: delete the pod
Oct 24 11:16:03.686: INFO: Waiting for pod pod-secrets-70f2c952-c76f-4a86-98e8-535cd51930ab to disappear
Oct 24 11:16:03.755: INFO: Pod pod-secrets-70f2c952-c76f-4a86-98e8-535cd51930ab no longer exists
[AfterEach] [sig-storage] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:16:03.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7059" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":306,"completed":255,"skipped":4216,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Watchers
... skipping 7 lines ...
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:16:13.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8135" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":306,"completed":256,"skipped":4243,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 15 lines ...
Oct 24 11:16:20.183: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:16:33.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3298" for this suite.
STEP: Destroying namespace "webhook-3298-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":306,"completed":257,"skipped":4253,"failed":0}
SSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] StatefulSet
... skipping 12 lines ...
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-3316
STEP: Creating statefulset with conflicting port in namespace statefulset-3316
STEP: Waiting until pod test-pod will start running in namespace statefulset-3316
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-3316
Oct 24 11:16:38.347: INFO: Observed stateful pod in namespace: statefulset-3316, name: ss-0, uid: 662f25c2-3743-4d9b-8a18-5c2a8f1c29ea, status phase: Pending. Waiting for statefulset controller to delete.
Oct 24 11:16:38.536: INFO: Observed stateful pod in namespace: statefulset-3316, name: ss-0, uid: 662f25c2-3743-4d9b-8a18-5c2a8f1c29ea, status phase: Failed. Waiting for statefulset controller to delete.
Oct 24 11:16:38.642: INFO: Observed stateful pod in namespace: statefulset-3316, name: ss-0, uid: 662f25c2-3743-4d9b-8a18-5c2a8f1c29ea, status phase: Failed. Waiting for statefulset controller to delete.
Oct 24 11:16:38.668: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-3316
STEP: Removing pod with conflicting port in namespace statefulset-3316
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-3316 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114
Oct 24 11:16:42.891: INFO: Deleting all statefulset in ns statefulset-3316
Oct 24 11:16:42.930: INFO: Scaling statefulset ss to 0
Oct 24 11:16:53.095: INFO: Waiting for statefulset status.replicas updated to 0
Oct 24 11:16:53.134: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:16:53.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3316" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":306,"completed":258,"skipped":4256,"failed":0}

------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-cli] Kubectl client
... skipping 162 lines ...
Oct 24 11:16:57.263: INFO: stdout: "deployment.apps/agnhost-replica created\n"
STEP: validating guestbook app
Oct 24 11:16:57.263: INFO: Waiting for all frontend pods to be Running.
Oct 24 11:17:02.364: INFO: Waiting for frontend to serve content.
Oct 24 11:17:02.414: INFO: Trying to add a new entry to the guestbook.
Oct 24 11:17:02.459: INFO: Verifying that added entry can be retrieved.
Oct 24 11:17:02.514: INFO: Failed to get response from guestbook. err: <nil>, response: {"data":""}
STEP: using delete to clean up resources
Oct 24 11:17:07.594: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=kubectl-8190 delete --grace-period=0 --force -f -'
Oct 24 11:17:07.954: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Oct 24 11:17:07.954: INFO: stdout: "service \"agnhost-replica\" force deleted\n"
STEP: using delete to clean up resources
Oct 24 11:17:07.955: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=kubectl-8190 delete --grace-period=0 --force -f -'
... skipping 16 lines ...
Oct 24 11:17:09.277: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Oct 24 11:17:09.277: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:17:09.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8190" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":306,"completed":259,"skipped":4256,"failed":0}
SS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Networking
... skipping 37 lines ...
Oct 24 11:17:37.426: INFO: >>> kubeConfig: /workspace/.kube/config
Oct 24 11:17:38.725: INFO: Found all 1 expected endpoints: [netserver-2]
[AfterEach] [sig-network] Networking
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:17:38.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6738" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":260,"skipped":4258,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Services
... skipping 11 lines ...
STEP: creating replication controller externalname-service in namespace services-5516
I1024 11:17:39.634834  143945 runners.go:190] Created replication controller with name: externalname-service, namespace: services-5516, replica count: 2
Oct 24 11:17:42.735: INFO: Creating new exec pod
I1024 11:17:42.735410  143945 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Oct 24 11:17:45.927: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=services-5516 exec execpodmszz6 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Oct 24 11:17:47.622: INFO: rc: 1
Oct 24 11:17:47.622: INFO: Service reachability failing with error: error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=services-5516 exec execpodmszz6 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 externalname-service 80
nc: connect to externalname-service port 80 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 24 11:17:48.622: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=services-5516 exec execpodmszz6 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Oct 24 11:17:50.119: INFO: stderr: "+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
Oct 24 11:17:50.119: INFO: stdout: ""
Oct 24 11:17:50.120: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=services-5516 exec execpodmszz6 -- /bin/sh -x -c nc -zv -t -w 2 10.0.171.9 80'
... skipping 3 lines ...
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:17:50.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5516" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:781
•{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":306,"completed":261,"skipped":4269,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 26 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:17:58.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2523" for this suite.
STEP: Destroying namespace "webhook-2523-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":306,"completed":262,"skipped":4287,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating projection with secret that has name projected-secret-test-c058c411-a4c7-46bf-94f6-24952975c405
STEP: Creating a pod to test consume secrets
Oct 24 11:17:58.731: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c440d932-29ab-42c9-9510-0f68d02fda55" in namespace "projected-3997" to be "Succeeded or Failed"
Oct 24 11:17:58.770: INFO: Pod "pod-projected-secrets-c440d932-29ab-42c9-9510-0f68d02fda55": Phase="Pending", Reason="", readiness=false. Elapsed: 39.103052ms
Oct 24 11:18:00.954: INFO: Pod "pod-projected-secrets-c440d932-29ab-42c9-9510-0f68d02fda55": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.222301302s
STEP: Saw pod success
Oct 24 11:18:00.954: INFO: Pod "pod-projected-secrets-c440d932-29ab-42c9-9510-0f68d02fda55" satisfied condition "Succeeded or Failed"
Oct 24 11:18:01.144: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod pod-projected-secrets-c440d932-29ab-42c9-9510-0f68d02fda55 container projected-secret-volume-test: <nil>
STEP: delete the pod
Oct 24 11:18:01.834: INFO: Waiting for pod pod-projected-secrets-c440d932-29ab-42c9-9510-0f68d02fda55 to disappear
Oct 24 11:18:01.955: INFO: Pod pod-projected-secrets-c440d932-29ab-42c9-9510-0f68d02fda55 no longer exists
[AfterEach] [sig-storage] Projected secret
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:18:01.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3997" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":263,"skipped":4297,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name configmap-test-volume-e1a85654-5f90-49fb-b249-db08e4f8c7ea
STEP: Creating a pod to test consume configMaps
Oct 24 11:18:03.066: INFO: Waiting up to 5m0s for pod "pod-configmaps-fa8497b5-5614-4d65-a251-d5b4120c2434" in namespace "configmap-4170" to be "Succeeded or Failed"
Oct 24 11:18:03.107: INFO: Pod "pod-configmaps-fa8497b5-5614-4d65-a251-d5b4120c2434": Phase="Pending", Reason="", readiness=false. Elapsed: 41.073978ms
Oct 24 11:18:05.187: INFO: Pod "pod-configmaps-fa8497b5-5614-4d65-a251-d5b4120c2434": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.120772345s
STEP: Saw pod success
Oct 24 11:18:05.187: INFO: Pod "pod-configmaps-fa8497b5-5614-4d65-a251-d5b4120c2434" satisfied condition "Succeeded or Failed"
Oct 24 11:18:05.230: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod pod-configmaps-fa8497b5-5614-4d65-a251-d5b4120c2434 container configmap-volume-test: <nil>
STEP: delete the pod
Oct 24 11:18:05.327: INFO: Waiting for pod pod-configmaps-fa8497b5-5614-4d65-a251-d5b4120c2434 to disappear
Oct 24 11:18:05.367: INFO: Pod pod-configmaps-fa8497b5-5614-4d65-a251-d5b4120c2434 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:18:05.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4170" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":264,"skipped":4316,"failed":0}
SSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Container Runtime
... skipping 20 lines ...
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:18:30.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7348" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":306,"completed":265,"skipped":4321,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 33 lines ...
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:18:44.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3172" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":306,"completed":266,"skipped":4340,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
... skipping 20 lines ...
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:18:51.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-7203" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
•{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":306,"completed":267,"skipped":4358,"failed":0}
SSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating pod pod-subpath-test-downwardapi-fsb6
STEP: Creating a pod to test atomic-volume-subpath
Oct 24 11:18:53.221: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-fsb6" in namespace "subpath-8592" to be "Succeeded or Failed"
Oct 24 11:18:53.283: INFO: Pod "pod-subpath-test-downwardapi-fsb6": Phase="Pending", Reason="", readiness=false. Elapsed: 62.168076ms
Oct 24 11:18:55.471: INFO: Pod "pod-subpath-test-downwardapi-fsb6": Phase="Running", Reason="", readiness=true. Elapsed: 2.249931457s
Oct 24 11:18:57.511: INFO: Pod "pod-subpath-test-downwardapi-fsb6": Phase="Running", Reason="", readiness=true. Elapsed: 4.289952906s
Oct 24 11:18:59.552: INFO: Pod "pod-subpath-test-downwardapi-fsb6": Phase="Running", Reason="", readiness=true. Elapsed: 6.3316683s
Oct 24 11:19:01.613: INFO: Pod "pod-subpath-test-downwardapi-fsb6": Phase="Running", Reason="", readiness=true. Elapsed: 8.391891459s
Oct 24 11:19:03.652: INFO: Pod "pod-subpath-test-downwardapi-fsb6": Phase="Running", Reason="", readiness=true. Elapsed: 10.431712659s
Oct 24 11:19:05.693: INFO: Pod "pod-subpath-test-downwardapi-fsb6": Phase="Running", Reason="", readiness=true. Elapsed: 12.471956982s
Oct 24 11:19:07.746: INFO: Pod "pod-subpath-test-downwardapi-fsb6": Phase="Running", Reason="", readiness=true. Elapsed: 14.525645725s
Oct 24 11:19:09.788: INFO: Pod "pod-subpath-test-downwardapi-fsb6": Phase="Running", Reason="", readiness=true. Elapsed: 16.567541728s
Oct 24 11:19:11.946: INFO: Pod "pod-subpath-test-downwardapi-fsb6": Phase="Running", Reason="", readiness=true. Elapsed: 18.725480037s
Oct 24 11:19:13.988: INFO: Pod "pod-subpath-test-downwardapi-fsb6": Phase="Running", Reason="", readiness=true. Elapsed: 20.76737455s
Oct 24 11:19:16.032: INFO: Pod "pod-subpath-test-downwardapi-fsb6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.811600645s
STEP: Saw pod success
Oct 24 11:19:16.032: INFO: Pod "pod-subpath-test-downwardapi-fsb6" satisfied condition "Succeeded or Failed"
Oct 24 11:19:16.077: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod pod-subpath-test-downwardapi-fsb6 container test-container-subpath-downwardapi-fsb6: <nil>
STEP: delete the pod
Oct 24 11:19:16.171: INFO: Waiting for pod pod-subpath-test-downwardapi-fsb6 to disappear
Oct 24 11:19:16.211: INFO: Pod pod-subpath-test-downwardapi-fsb6 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-fsb6
Oct 24 11:19:16.211: INFO: Deleting pod "pod-subpath-test-downwardapi-fsb6" in namespace "subpath-8592"
[AfterEach] [sig-storage] Subpath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:19:16.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8592" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":306,"completed":268,"skipped":4362,"failed":0}

------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name projected-configmap-test-volume-map-08d3ce61-d219-4d60-952e-d49fec5b5ab6
STEP: Creating a pod to test consume configMaps
Oct 24 11:19:16.627: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4c3bd75b-6b2c-4aed-8eb5-5328f01c6457" in namespace "projected-3793" to be "Succeeded or Failed"
Oct 24 11:19:16.667: INFO: Pod "pod-projected-configmaps-4c3bd75b-6b2c-4aed-8eb5-5328f01c6457": Phase="Pending", Reason="", readiness=false. Elapsed: 39.714042ms
Oct 24 11:19:18.707: INFO: Pod "pod-projected-configmaps-4c3bd75b-6b2c-4aed-8eb5-5328f01c6457": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079930362s
Oct 24 11:19:20.751: INFO: Pod "pod-projected-configmaps-4c3bd75b-6b2c-4aed-8eb5-5328f01c6457": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.12425157s
STEP: Saw pod success
Oct 24 11:19:20.752: INFO: Pod "pod-projected-configmaps-4c3bd75b-6b2c-4aed-8eb5-5328f01c6457" satisfied condition "Succeeded or Failed"
Oct 24 11:19:20.794: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod pod-projected-configmaps-4c3bd75b-6b2c-4aed-8eb5-5328f01c6457 container agnhost-container: <nil>
STEP: delete the pod
Oct 24 11:19:20.917: INFO: Waiting for pod pod-projected-configmaps-4c3bd75b-6b2c-4aed-8eb5-5328f01c6457 to disappear
Oct 24 11:19:20.962: INFO: Pod pod-projected-configmaps-4c3bd75b-6b2c-4aed-8eb5-5328f01c6457 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:19:20.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3793" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":306,"completed":269,"skipped":4362,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] [sig-node] PreStop
... skipping 25 lines ...
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:19:30.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-2455" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":306,"completed":270,"skipped":4380,"failed":0}
SSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 8 lines ...
Oct 24 11:19:31.188: INFO: >>> kubeConfig: /workspace/.kube/config
Oct 24 11:19:39.585: INFO: >>> kubeConfig: /workspace/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:20:01.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7271" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":306,"completed":271,"skipped":4384,"failed":0}
SS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 40 lines ...
Oct 24 11:20:14.610: INFO: Deleting pod "simpletest-rc-to-be-deleted-bhhxf" in namespace "gc-7076"
Oct 24 11:20:14.663: INFO: Deleting pod "simpletest-rc-to-be-deleted-bhwcs" in namespace "gc-7076"
[AfterEach] [sig-api-machinery] Garbage collector
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:20:14.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7076" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":306,"completed":272,"skipped":4386,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Services
... skipping 11 lines ...
STEP: creating replication controller affinity-clusterip-transition in namespace services-369
I1024 11:20:15.099409  143945 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-369, replica count: 3
I1024 11:20:18.151255  143945 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Oct 24 11:20:18.234: INFO: Creating new exec pod
Oct 24 11:20:21.527: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=services-369 exec execpod-affinityvgd52 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80'
Oct 24 11:20:23.195: INFO: rc: 1
Oct 24 11:20:23.195: INFO: Service reachability failing with error: error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=services-369 exec execpod-affinityvgd52 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80:
Command stdout:

stderr:
+ nc -zv -t -w 2 affinity-clusterip-transition 80
nc: connect to affinity-clusterip-transition port 80 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 24 11:20:24.195: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=services-369 exec execpod-affinityvgd52 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80'
Oct 24 11:20:25.945: INFO: stderr: "+ nc -zv -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\n"
Oct 24 11:20:25.945: INFO: stdout: ""
Oct 24 11:20:25.946: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=services-369 exec execpod-affinityvgd52 -- /bin/sh -x -c nc -zv -t -w 2 10.0.189.135 80'
... skipping 63 lines ...
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:21:28.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-369" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:781
•{"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":306,"completed":273,"skipped":4404,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 24 11:21:28.149: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating the pod
Oct 24 11:21:28.381: INFO: PodSpec: initContainers in spec.initContainers
Oct 24 11:22:15.278: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-b5836c0d-29f8-468f-9847-621c2028cbf9", GenerateName:"", Namespace:"init-container-2719", SelfLink:"", UID:"7baca209-9768-45a5-accc-fb19844a95b2", ResourceVersion:"23540", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63739135288, loc:(*time.Location)(0x77697a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"381286926"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004b00040), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004b00060)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004b00080), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004b000a0)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-6cf2x", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc007a50000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), 
ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-6cf2x", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-6cf2x", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, 
VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-6cf2x", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc004b12098), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"bootstrap-e2e-minion-group-vkx8", HostNetwork:false, HostPID:false, HostIPC:false, 
ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0004b5500), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc004b12110)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc004b12130)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc004b12138), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc004b1213c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc004618020), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739135288, loc:(*time.Location)(0x77697a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739135288, loc:(*time.Location)(0x77697a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739135288, loc:(*time.Location)(0x77697a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: 
[run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739135288, loc:(*time.Location)(0x77697a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.138.0.3", PodIP:"10.64.1.21", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.64.1.21"}}, StartTime:(*v1.Time)(0xc004b000c0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0004b55e0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0004b56c0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://f2c7890ff2f564424e235c020f6748c5637d71eb8a1a44a164d406c3cfc84009", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc004b00100), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc004b000e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), 
Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc004b121bf)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:22:15.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2719" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":306,"completed":274,"skipped":4428,"failed":0}
SSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected combined
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name configmap-projected-all-test-volume-752fd74a-794b-4ddd-bdd7-427ba9a6e5eb
STEP: Creating secret with name secret-projected-all-test-volume-66d1c687-45cf-4b27-b750-164f5359d03a
STEP: Creating a pod to test Check all projections for projected volume plugin
Oct 24 11:22:15.686: INFO: Waiting up to 5m0s for pod "projected-volume-bda66a14-aea6-4721-ae43-41e88474fd6c" in namespace "projected-6514" to be "Succeeded or Failed"
Oct 24 11:22:15.731: INFO: Pod "projected-volume-bda66a14-aea6-4721-ae43-41e88474fd6c": Phase="Pending", Reason="", readiness=false. Elapsed: 44.290077ms
Oct 24 11:22:17.772: INFO: Pod "projected-volume-bda66a14-aea6-4721-ae43-41e88474fd6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.085445927s
STEP: Saw pod success
Oct 24 11:22:17.772: INFO: Pod "projected-volume-bda66a14-aea6-4721-ae43-41e88474fd6c" satisfied condition "Succeeded or Failed"
Oct 24 11:22:17.811: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod projected-volume-bda66a14-aea6-4721-ae43-41e88474fd6c container projected-all-volume-test: <nil>
STEP: delete the pod
Oct 24 11:22:17.960: INFO: Waiting for pod projected-volume-bda66a14-aea6-4721-ae43-41e88474fd6c to disappear
Oct 24 11:22:18.000: INFO: Pod projected-volume-bda66a14-aea6-4721-ae43-41e88474fd6c no longer exists
[AfterEach] [sig-storage] Projected combined
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:22:18.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6514" for this suite.
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":306,"completed":275,"skipped":4432,"failed":0}
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] StatefulSet
... skipping 40 lines ...
Oct 24 11:24:40.535: INFO: Waiting for statefulset status.replicas updated to 0
Oct 24 11:24:40.575: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:24:40.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5206" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":306,"completed":276,"skipped":4437,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check is all data is printed  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-cli] Kubectl client
... skipping 10 lines ...
Oct 24 11:24:41.396: INFO: stderr: ""
Oct 24 11:24:41.396: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"20+\", GitVersion:\"v1.20.0-alpha.3.114+5935fcd704fe89\", GitCommit:\"5935fcd704fe89048776d02cf1ef4f939743c042\", GitTreeState:\"clean\", BuildDate:\"2020-10-24T03:47:00Z\", GoVersion:\"go1.15.2\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"20+\", GitVersion:\"v1.20.0-alpha.3.114+5935fcd704fe89\", GitCommit:\"5935fcd704fe89048776d02cf1ef4f939743c042\", GitTreeState:\"clean\", BuildDate:\"2020-10-24T03:47:00Z\", GoVersion:\"go1.15.2\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:24:41.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5740" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":306,"completed":277,"skipped":4463,"failed":0}
SSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] Deployment
... skipping 20 lines ...
Oct 24 11:24:48.743: INFO: Pod "test-cleanup-deployment-685c4f8568-9kjl8" is available:
&Pod{ObjectMeta:{test-cleanup-deployment-685c4f8568-9kjl8 test-cleanup-deployment-685c4f8568- deployment-7236  c1a7bfd7-eb20-4625-a91c-a8949b47ccd2 24067 0 2020-10-24 11:24:44 +0000 UTC <nil> <nil> map[name:cleanup-pod pod-template-hash:685c4f8568] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-685c4f8568 2c0622b8-b623-442b-ae30-32060b7d257a 0xc004b13067 0xc004b13068}] []  [{kube-controller-manager Update v1 2020-10-24 11:24:44 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c0622b8-b623-442b-ae30-32060b7d257a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-24 11:24:46 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.64.1.27\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-p2p9r,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-p2p9r,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-p2p9r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy
:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-vkx8,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-24 11:24:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-24 11:24:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-24 11:24:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-24 11:24:44 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.3,PodIP:10.64.1.27,StartTime:2020-10-24 11:24:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-24 11:24:45 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a,ContainerID:containerd://a11f4ad7179c8b287631257af89321fabae684d44041ed042b8d6e5e9fd1bfaa,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.64.1.27,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:24:48.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-7236" for this suite.
•{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":306,"completed":278,"skipped":4470,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating secret with name secret-test-8c273d25-6708-40a1-8f39-c20296ecddc2
STEP: Creating a pod to test consume secrets
Oct 24 11:24:49.119: INFO: Waiting up to 5m0s for pod "pod-secrets-d1ad48cf-de55-4968-b78b-894f286d2232" in namespace "secrets-7251" to be "Succeeded or Failed"
Oct 24 11:24:49.159: INFO: Pod "pod-secrets-d1ad48cf-de55-4968-b78b-894f286d2232": Phase="Pending", Reason="", readiness=false. Elapsed: 40.240658ms
Oct 24 11:24:51.203: INFO: Pod "pod-secrets-d1ad48cf-de55-4968-b78b-894f286d2232": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.084263346s
STEP: Saw pod success
Oct 24 11:24:51.203: INFO: Pod "pod-secrets-d1ad48cf-de55-4968-b78b-894f286d2232" satisfied condition "Succeeded or Failed"
Oct 24 11:24:51.243: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod pod-secrets-d1ad48cf-de55-4968-b78b-894f286d2232 container secret-volume-test: <nil>
STEP: delete the pod
Oct 24 11:24:51.360: INFO: Waiting for pod pod-secrets-d1ad48cf-de55-4968-b78b-894f286d2232 to disappear
Oct 24 11:24:51.403: INFO: Pod pod-secrets-d1ad48cf-de55-4968-b78b-894f286d2232 no longer exists
[AfterEach] [sig-storage] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:24:51.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7251" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":306,"completed":279,"skipped":4479,"failed":0}
SSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Probing container
... skipping 18 lines ...
Oct 24 11:25:09.825: INFO: The status of Pod test-webserver-492bca7a-7c0e-46be-8594-1b7de551d82e is Running (Ready = true)
Oct 24 11:25:09.866: INFO: Container started at 2020-10-24 11:24:52 +0000 UTC, pod became ready at 2020-10-24 11:25:07 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:25:09.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1426" for this suite.
•{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":306,"completed":280,"skipped":4484,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-instrumentation] Events API 
  should delete a collection of events [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-instrumentation] Events API
... skipping 12 lines ...
Oct 24 11:25:10.330: INFO: requesting DeleteCollection of events
STEP: check that the list of events matches the requested quantity
[AfterEach] [sig-instrumentation] Events API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:25:10.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-3243" for this suite.
•{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":306,"completed":281,"skipped":4498,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] PodTemplates 
  should run the lifecycle of PodTemplates [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-node] PodTemplates
... skipping 5 lines ...
[It] should run the lifecycle of PodTemplates [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[AfterEach] [sig-node] PodTemplates
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:25:11.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-2916" for this suite.
•{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":306,"completed":282,"skipped":4534,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Networking
... skipping 36 lines ...
Oct 24 11:25:34.899: INFO: >>> kubeConfig: /workspace/.kube/config
Oct 24 11:25:35.174: INFO: Found all 1 expected endpoints: [netserver-2]
[AfterEach] [sig-network] Networking
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:25:35.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4572" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":283,"skipped":4549,"failed":0}
SSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name configmap-test-volume-map-ee9e8ad9-5ba3-44a1-95d5-e7c0d3253330
STEP: Creating a pod to test consume configMaps
Oct 24 11:25:35.555: INFO: Waiting up to 5m0s for pod "pod-configmaps-37cc4566-cf65-4819-a778-3ffd7f21ea6a" in namespace "configmap-3722" to be "Succeeded or Failed"
Oct 24 11:25:35.598: INFO: Pod "pod-configmaps-37cc4566-cf65-4819-a778-3ffd7f21ea6a": Phase="Pending", Reason="", readiness=false. Elapsed: 42.695515ms
Oct 24 11:25:37.768: INFO: Pod "pod-configmaps-37cc4566-cf65-4819-a778-3ffd7f21ea6a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.212618457s
STEP: Saw pod success
Oct 24 11:25:37.768: INFO: Pod "pod-configmaps-37cc4566-cf65-4819-a778-3ffd7f21ea6a" satisfied condition "Succeeded or Failed"
Oct 24 11:25:37.838: INFO: Trying to get logs from node bootstrap-e2e-minion-group-g27b pod pod-configmaps-37cc4566-cf65-4819-a778-3ffd7f21ea6a container configmap-volume-test: <nil>
STEP: delete the pod
Oct 24 11:25:38.222: INFO: Waiting for pod pod-configmaps-37cc4566-cf65-4819-a778-3ffd7f21ea6a to disappear
Oct 24 11:25:38.266: INFO: Pod pod-configmaps-37cc4566-cf65-4819-a778-3ffd7f21ea6a no longer exists
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:25:38.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3722" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":306,"completed":284,"skipped":4553,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl server-side dry-run 
  should check if kubectl can dry-run update Pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-cli] Kubectl client
... skipping 13 lines ...
STEP: replace the image in the pod with server-side dry-run
Oct 24 11:25:38.846: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=kubectl-3871 get pod e2e-test-httpd-pod -o json'
Oct 24 11:25:39.123: INFO: stderr: ""
Oct 24 11:25:39.123: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-10-24T11:25:38Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-httpd-pod\"\n        },\n        \"managedFields\": [\n            {\n                \"apiVersion\": \"v1\",\n                \"fieldsType\": \"FieldsV1\",\n                \"fieldsV1\": {\n                    \"f:metadata\": {\n                        \"f:labels\": {\n                            \".\": {},\n                            \"f:run\": {}\n                        }\n                    },\n                    \"f:spec\": {\n                        \"f:containers\": {\n                            \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n                                \".\": {},\n                                \"f:image\": {},\n                                \"f:imagePullPolicy\": {},\n                                \"f:name\": {},\n                                \"f:resources\": {},\n                                \"f:terminationMessagePath\": {},\n                                \"f:terminationMessagePolicy\": {}\n                            }\n                        },\n                        \"f:dnsPolicy\": {},\n                        \"f:enableServiceLinks\": {},\n                        \"f:restartPolicy\": {},\n                        \"f:schedulerName\": {},\n                        \"f:securityContext\": {},\n                        \"f:terminationGracePeriodSeconds\": {}\n                    }\n                },\n                \"manager\": \"kubectl-run\",\n                \"operation\": \"Update\",\n                \"time\": \"2020-10-24T11:25:38Z\"\n            },\n            {\n                \"apiVersion\": \"v1\",\n                \"fieldsType\": \"FieldsV1\",\n                \"fieldsV1\": {\n                    \"f:status\": {\n                        \"f:conditions\": {\n                    
        \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n                                \".\": {},\n                                \"f:lastProbeTime\": {},\n                                \"f:lastTransitionTime\": {},\n                                \"f:message\": {},\n                                \"f:reason\": {},\n                                \"f:status\": {},\n                                \"f:type\": {}\n                            },\n                            \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n                                \".\": {},\n                                \"f:lastProbeTime\": {},\n                                \"f:lastTransitionTime\": {},\n                                \"f:status\": {},\n                                \"f:type\": {}\n                            },\n                            \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n                                \".\": {},\n                                \"f:lastProbeTime\": {},\n                                \"f:lastTransitionTime\": {},\n                                \"f:message\": {},\n                                \"f:reason\": {},\n                                \"f:status\": {},\n                                \"f:type\": {}\n                            }\n                        },\n                        \"f:containerStatuses\": {},\n                        \"f:hostIP\": {},\n                        \"f:startTime\": {}\n                    }\n                },\n                \"manager\": \"kubelet\",\n                \"operation\": \"Update\",\n                \"time\": \"2020-10-24T11:25:38Z\"\n            }\n        ],\n        \"name\": \"e2e-test-httpd-pod\",\n        \"namespace\": \"kubectl-3871\",\n        \"resourceVersion\": \"24307\",\n        \"uid\": \"3f36c871-6f13-44dd-9e1f-b88727751317\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n        
        \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-httpd-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-vkcgd\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"bootstrap-e2e-minion-group-g27b\",\n        \"preemptionPolicy\": \"PreemptLowerPriority\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-vkcgd\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-vkcgd\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": 
\"2020-10-24T11:25:38Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-10-24T11:25:38Z\",\n                \"message\": \"containers with unready status: [e2e-test-httpd-pod]\",\n                \"reason\": \"ContainersNotReady\",\n                \"status\": \"False\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-10-24T11:25:38Z\",\n                \"message\": \"containers with unready status: [e2e-test-httpd-pod]\",\n                \"reason\": \"ContainersNotReady\",\n                \"status\": \"False\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-10-24T11:25:38Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imageID\": \"\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-httpd-pod\",\n                \"ready\": false,\n                \"restartCount\": 0,\n                \"started\": false,\n                \"state\": {\n                    \"waiting\": {\n                        \"reason\": \"ContainerCreating\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.138.0.4\",\n        \"phase\": \"Pending\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-10-24T11:25:38Z\"\n    }\n}\n"
Oct 24 11:25:39.123: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=kubectl-3871 replace -f - --dry-run=server'
Oct 24 11:25:39.862: INFO: rc: 1
Oct 24 11:25:39.863: FAIL: Unexpected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=kubectl-3871 replace -f - --dry-run=server:\nCommand stdout:\n\nstderr:\nError from server (Conflict): error when replacing \"STDIN\": Operation cannot be fulfilled on pods \"e2e-test-httpd-pod\": the object has been modified; please apply your changes to the latest version and try again\n\nerror:\nexit status 1",
        },
        Code: 1,
    }
    error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=kubectl-3871 replace -f - --dry-run=server:
    Command stdout:
    
    stderr:
    Error from server (Conflict): error when replacing "STDIN": Operation cannot be fulfilled on pods "e2e-test-httpd-pod": the object has been modified; please apply your changes to the latest version and try again
    
    error:
    exit status 1
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.KubectlBuilder.ExecOrDie(0xc002ce34a0, 0x0, 0xc004d970b0, 0xc, 0x4, 0xc0037799c0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:598 +0xbf
... skipping 19 lines ...
Oct 24 11:25:39.903: INFO: At 2020-10-24 11:25:39 +0000 UTC - event for e2e-test-httpd-pod: {kubelet bootstrap-e2e-minion-group-g27b} Started: Started container e2e-test-httpd-pod
Oct 24 11:25:39.942: INFO: POD                 NODE                             PHASE    GRACE  CONDITIONS
Oct 24 11:25:39.942: INFO: e2e-test-httpd-pod  bootstrap-e2e-minion-group-g27b  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-24 11:25:38 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-10-24 11:25:39 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-10-24 11:25:39 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-24 11:25:38 +0000 UTC  }]
Oct 24 11:25:39.942: INFO: 
Oct 24 11:25:39.988: INFO: 
Logging node info for node bootstrap-e2e-master
Oct 24 11:25:40.033: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master    05e8895a-a3df-4a9a-a8fc-5efa938de78a 23516 0 2020-10-24 09:41:08 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubelet Update v1 2020-10-24 09:41:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/instance-type":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:attachable-volumes-gce-pd":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:attachable-volumes-gce-pd":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatT
ime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:config":{},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kube-controller-manager Update v1 2020-10-24 09:41:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}},"f:labels":{"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}}}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://gce-up-c1-4-glat-up-clu/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16684785664 0} {<nil>}  BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3866808320 0} 
{<nil>} 3776180Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{15016307073 0} {<nil>} 15016307073 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3604664320 0} {<nil>} 3520180Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-10-24 09:41:20 +0000 UTC,LastTransitionTime:2020-10-24 09:41:20 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-10-24 11:22:03 +0000 UTC,LastTransitionTime:2020-10-24 09:41:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-10-24 11:22:03 +0000 UTC,LastTransitionTime:2020-10-24 09:41:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-10-24 11:22:03 +0000 UTC,LastTransitionTime:2020-10-24 09:41:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-10-24 11:22:03 +0000 UTC,LastTransitionTime:2020-10-24 09:41:17 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.105.36.219,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.gce-up-c1-4-glat-up-clu.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.gce-up-c1-4-glat-up-clu.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e94e20813cf16284af0ca82ca7916a34,SystemUUID:e94e2081-3cf1-6284-af0c-a82ca7916a34,BootID:11a814dc-deba-4a5a-8af4-b5db11f69dc3,KernelVersion:5.4.49+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.4.1,KubeletVersion:v1.20.0-alpha.3.114+5935fcd704fe89,KubeProxyVersion:v1.20.0-alpha.3.114+5935fcd704fe89,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-apiserver-amd64:v1.20.0-alpha.3.114_5935fcd704fe89],SizeBytes:171109681,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager-amd64:v1.20.0-alpha.3.114_5935fcd704fe89],SizeBytes:162053965,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:86742272,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler-amd64:v1.20.0-alpha.3.114_5935fcd704fe89],SizeBytes:69550394,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager@sha256:c0ed56727cd78700034f2f863d774412c78681fb6535456f5e5c420f4248c5a1 k8s.gcr.io/kube-addon-manager:v9.1.1],SizeBytes:30515541,},ContainerImage{Names:[k8s.gcr.io/ingress-gce-glbc-amd64@sha256:1a859d138b4874642e9a8709e7ab04324669c77742349b5b21b1ef8a25fef55f k8s.gcr.io/ingress-gce-glbc-amd64:v1.6.1],SizeBytes:26526716,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e 
k8s.gcr.io/prometheus-to-sd:v0.5.0],SizeBytes:9515805,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:299513,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},}
Oct 24 11:25:40.033: INFO: 
Logging kubelet events for node bootstrap-e2e-master
Oct 24 11:25:40.072: INFO: 
Logging pods the kubelet thinks are on node bootstrap-e2e-master
Oct 24 11:25:40.176: INFO: kube-controller-manager-bootstrap-e2e-master started at 2020-10-24 09:39:46 +0000 UTC (0+1 container statuses recorded)
Oct 24 11:25:40.176: INFO: 	Container kube-controller-manager ready: true, restart count 0
... skipping 14 lines ...
Oct 24 11:25:40.176: INFO: 	Container kube-apiserver ready: true, restart count 1
W1024 11:25:40.224788  143945 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 24 11:25:40.511: INFO: 
Latency metrics for node bootstrap-e2e-master
Oct 24 11:25:40.511: INFO: 
Logging node info for node bootstrap-e2e-minion-group-bf58
Oct 24 11:25:40.557: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-bf58    b860c9e1-4371-444e-a4d7-57e20d5590f3 23636 0 2020-10-24 09:41:19 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-bf58 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{node-problem-detector Update v1 2020-10-24 09:41:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 
2020-10-24 09:41:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}},"f:labels":{"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}}} {e2e.test Update v1 2020-10-24 11:03:44 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:scheduling.k8s.io/foo":{}}}}} {kubelet Update v1 2020-10-24 11:03:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/instance-type":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:attachable-volumes-gce-pd":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{".":{},"f:attachable-volumes-gce-pd":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message"
:{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:config":{},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://gce-up-c1-4-glat-up-clu/us-west1-b/bootstrap-e2e-minion-group-bf58,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101241290752 0} {<nil>}  BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7823917056 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {<nil>} 3 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91117161526 0} {<nil>} 91117161526 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7561773056 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {<nil>} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2020-10-24 11:21:32 +0000 UTC,LastTransitionTime:2020-10-24 09:41:20 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2020-10-24 11:21:32 
+0000 UTC,LastTransitionTime:2020-10-24 09:41:20 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2020-10-24 11:21:32 +0000 UTC,LastTransitionTime:2020-10-24 09:41:20 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2020-10-24 11:21:32 +0000 UTC,LastTransitionTime:2020-10-24 09:41:20 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2020-10-24 11:21:32 +0000 UTC,LastTransitionTime:2020-10-24 09:41:20 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2020-10-24 11:21:32 +0000 UTC,LastTransitionTime:2020-10-24 09:41:20 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2020-10-24 11:21:32 +0000 UTC,LastTransitionTime:2020-10-24 09:41:20 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-10-24 09:41:33 +0000 UTC,LastTransitionTime:2020-10-24 09:41:33 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-10-24 11:22:28 +0000 UTC,LastTransitionTime:2020-10-24 09:41:19 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-10-24 11:22:28 +0000 UTC,LastTransitionTime:2020-10-24 09:41:19 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-10-24 11:22:28 +0000 
UTC,LastTransitionTime:2020-10-24 09:41:19 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-10-24 11:22:28 +0000 UTC,LastTransitionTime:2020-10-24 09:41:19 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:35.227.138.8,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-bf58.c.gce-up-c1-4-glat-up-clu.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-bf58.c.gce-up-c1-4-glat-up-clu.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7bf919415d4f211d89db9157299f4ea0,SystemUUID:7bf91941-5d4f-211d-89db-9157299f4ea0,BootID:ce407867-666c-4394-aed7-c7ba3e056a59,KernelVersion:5.4.49+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.4.1,KubeletVersion:v1.20.0-alpha.3.114+5935fcd704fe89,KubeProxyVersion:v1.20.0-alpha.3.114+5935fcd704fe89,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.20.0-alpha.3.114_5935fcd704fe89],SizeBytes:140129137,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a k8s.gcr.io/e2e-test-images/agnhost:2.21],SizeBytes:46251468,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/sig-storage/snapshot-controller@sha256:36ca32433c069246ea8988a7b3dbdf0aabf8345be9122b8a25426e6c487878de 
k8s.gcr.io/sig-storage/snapshot-controller:v3.0.0],SizeBytes:17462937,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:13982350,},ContainerImage{Names:[k8s.gcr.io/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b k8s.gcr.io/metrics-server-amd64:v0.3.6],SizeBytes:10542830,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e k8s.gcr.io/prometheus-to-sd:v0.5.0],SizeBytes:9515805,},ContainerImage{Names:[k8s.gcr.io/addon-resizer@sha256:35745de3c9a2884d53ad0e81b39f1eed9a7c77f5f909b9e84f9712b37ffb3021 k8s.gcr.io/addon-resizer:1.8.11],SizeBytes:9347950,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[k8s.gcr.io/ingress-gce-404-server-with-metrics-amd64@sha256:d83d8a481145d0eb71f8bd71ae236d1c6a931dd3bdcaf80919a8ec4a4d8aff74 k8s.gcr.io/ingress-gce-404-server-with-metrics-amd64:v1.6.0],SizeBytes:6362391,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:1804628,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:299513,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},}
Oct 24 11:25:40.557: INFO: 
Logging kubelet events for node bootstrap-e2e-minion-group-bf58
Oct 24 11:25:40.599: INFO: 
Logging pods the kubelet thinks are on node bootstrap-e2e-minion-group-bf58
Oct 24 11:25:40.671: INFO: coredns-6954c77b9b-5zbck started at 2020-10-24 10:02:09 +0000 UTC (0+1 container statuses recorded)
Oct 24 11:25:40.671: INFO: 	Container coredns ready: true, restart count 0
... skipping 11 lines ...
Oct 24 11:25:40.671: INFO: 	Container kube-proxy ready: true, restart count 0
W1024 11:25:40.718483  143945 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 24 11:25:40.870: INFO: 
Latency metrics for node bootstrap-e2e-minion-group-bf58
Oct 24 11:25:40.870: INFO: 
Logging node info for node bootstrap-e2e-minion-group-g27b
Oct 24 11:25:40.912: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-g27b    32504c88-ea88-49ac-81a6-d828cab1b20d 23749 0 2020-10-24 09:41:18 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-g27b kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{node-problem-detector Update v1 2020-10-24 09:41:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 
2020-10-24 09:41:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}},"f:labels":{"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}}} {e2e.test Update v1 2020-10-24 11:03:44 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:scheduling.k8s.io/foo":{}}}}} {kubelet Update v1 2020-10-24 11:03:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/instance-type":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:attachable-volumes-gce-pd":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{".":{},"f:attachable-volumes-gce-pd":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message"
:{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:config":{},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://gce-up-c1-4-glat-up-clu/us-west1-b/bootstrap-e2e-minion-group-g27b,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101241290752 0} {<nil>}  BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7823925248 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {<nil>} 3 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91117161526 0} {<nil>} 91117161526 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7561781248 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {<nil>} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2020-10-24 11:21:30 +0000 UTC,LastTransitionTime:2020-10-24 09:41:20 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning 
properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2020-10-24 11:21:30 +0000 UTC,LastTransitionTime:2020-10-24 09:41:20 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2020-10-24 11:21:30 +0000 UTC,LastTransitionTime:2020-10-24 09:41:20 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2020-10-24 11:21:30 +0000 UTC,LastTransitionTime:2020-10-24 09:41:20 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2020-10-24 11:21:30 +0000 UTC,LastTransitionTime:2020-10-24 09:41:20 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2020-10-24 11:21:30 +0000 UTC,LastTransitionTime:2020-10-24 09:41:20 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2020-10-24 11:21:30 +0000 UTC,LastTransitionTime:2020-10-24 09:41:20 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-10-24 09:41:33 +0000 UTC,LastTransitionTime:2020-10-24 09:41:33 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-10-24 11:22:57 +0000 UTC,LastTransitionTime:2020-10-24 09:41:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-10-24 11:22:57 +0000 UTC,LastTransitionTime:2020-10-24 09:41:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-10-24 11:22:57 +0000 UTC,LastTransitionTime:2020-10-24 09:41:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-10-24 11:22:57 +0000 UTC,LastTransitionTime:2020-10-24 09:41:18 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:35.230.68.133,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-g27b.c.gce-up-c1-4-glat-up-clu.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-g27b.c.gce-up-c1-4-glat-up-clu.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d021b312779b9981995313f93f51f6f3,SystemUUID:d021b312-779b-9981-9953-13f93f51f6f3,BootID:fba9b7e2-a21f-48e6-a5a5-09a0a34dc7fd,KernelVersion:5.4.49+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.4.1,KubeletVersion:v1.20.0-alpha.3.114+5935fcd704fe89,KubeProxyVersion:v1.20.0-alpha.3.114+5935fcd704fe89,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.20.0-alpha.3.114_5935fcd704fe89],SizeBytes:140129137,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[docker.io/library/nginx@sha256:ed7f815851b5299f616220a63edac69a4cc200e7f536a56e421988da82e44ed8 docker.io/library/nginx:latest],SizeBytes:53593938,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a 
k8s.gcr.io/e2e-test-images/agnhost:2.21],SizeBytes:46251468,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/cluster-proportional-autoscaler-amd64@sha256:be8875e5584750b7a490244ee56a121a714aa3d124164a5090cd8b3570c5650f k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.8.1],SizeBytes:15208262,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:13982350,},ContainerImage{Names:[k8s.gcr.io/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b k8s.gcr.io/metrics-server-amd64:v0.3.6],SizeBytes:10542830,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e k8s.gcr.io/prometheus-to-sd:v0.5.0],SizeBytes:9515805,},ContainerImage{Names:[k8s.gcr.io/addon-resizer@sha256:35745de3c9a2884d53ad0e81b39f1eed9a7c77f5f909b9e84f9712b37ffb3021 k8s.gcr.io/addon-resizer:1.8.11],SizeBytes:9347950,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc 
gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:1804628,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:299513,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},}
Oct 24 11:25:40.912: INFO: 
Logging kubelet events for node bootstrap-e2e-minion-group-g27b
Oct 24 11:25:40.951: INFO: 
Logging pods the kubelet thinks are on node bootstrap-e2e-minion-group-g27b
Oct 24 11:25:40.995: INFO: kube-proxy-bootstrap-e2e-minion-group-g27b started at 2020-10-24 09:41:18 +0000 UTC (0+1 container statuses recorded)
Oct 24 11:25:40.995: INFO: 	Container kube-proxy ready: true, restart count 0
... skipping 8 lines ...
Oct 24 11:25:40.995: INFO: 	Container autoscaler ready: true, restart count 0
W1024 11:25:41.040820  143945 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 24 11:25:41.209: INFO: 
Latency metrics for node bootstrap-e2e-minion-group-g27b
Oct 24 11:25:41.209: INFO: 
Logging node info for node bootstrap-e2e-minion-group-vkx8
Oct 24 11:25:41.252: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-vkx8    ebc2293d-dc4d-461b-b384-d341a51f2551 23748 0 2020-10-24 09:41:17 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-vkx8 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{node-problem-detector Update v1 2020-10-24 09:41:20 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 
2020-10-24 09:41:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}},"f:labels":{"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}}} {e2e.test Update v1 2020-10-24 11:03:44 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakecpu":{},"f:scheduling.k8s.io/foo":{}}}}} {kubelet Update v1 2020-10-24 11:03:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/instance-type":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:attachable-volumes-gce-pd":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{".":{},"f:attachable-volumes-gce-pd":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastH
eartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:config":{},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://gce-up-c1-4-glat-up-clu/us-west1-b/bootstrap-e2e-minion-group-vkx8,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101241290752 0} {<nil>}  BinarySI},example.com/fakecpu: {{1 3} {<nil>} 1k DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7823917056 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {<nil>} 3 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91117161526 0} {<nil>} 91117161526 DecimalSI},example.com/fakecpu: {{1 3} {<nil>} 1k DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7561773056 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {<nil>} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2020-10-24 11:21:30 +0000 UTC,LastTransitionTime:2020-10-24 09:41:19 +0000 
UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2020-10-24 11:21:30 +0000 UTC,LastTransitionTime:2020-10-24 09:41:19 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2020-10-24 11:21:30 +0000 UTC,LastTransitionTime:2020-10-24 09:41:19 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2020-10-24 11:21:30 +0000 UTC,LastTransitionTime:2020-10-24 09:41:19 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2020-10-24 11:21:30 +0000 UTC,LastTransitionTime:2020-10-24 09:41:19 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2020-10-24 11:21:30 +0000 UTC,LastTransitionTime:2020-10-24 09:41:19 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2020-10-24 11:21:30 +0000 UTC,LastTransitionTime:2020-10-24 09:41:19 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-10-24 09:41:33 +0000 UTC,LastTransitionTime:2020-10-24 09:41:33 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-10-24 11:22:56 +0000 UTC,LastTransitionTime:2020-10-24 09:41:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-10-24 11:22:56 +0000 UTC,LastTransitionTime:2020-10-24 09:41:17 +0000 
UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-10-24 11:22:56 +0000 UTC,LastTransitionTime:2020-10-24 09:41:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-10-24 11:22:56 +0000 UTC,LastTransitionTime:2020-10-24 09:41:18 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:35.233.170.5,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-vkx8.c.gce-up-c1-4-glat-up-clu.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-vkx8.c.gce-up-c1-4-glat-up-clu.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5b01ffb5bb092a5a54eb1118ba44ac6a,SystemUUID:5b01ffb5-bb09-2a5a-54eb-1118ba44ac6a,BootID:d44a5021-f6ec-461c-b3b5-0b88992c2ca9,KernelVersion:5.4.49+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.4.1,KubeletVersion:v1.20.0-alpha.3.114+5935fcd704fe89,KubeProxyVersion:v1.20.0-alpha.3.114+5935fcd704fe89,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.20.0-alpha.3.114_5935fcd704fe89],SizeBytes:140129137,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:86742272,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[docker.io/library/nginx@sha256:ed7f815851b5299f616220a63edac69a4cc200e7f536a56e421988da82e44ed8 
docker.io/library/nginx:latest],SizeBytes:53593938,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a k8s.gcr.io/e2e-test-images/agnhost:2.21],SizeBytes:46251468,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:13982350,},ContainerImage{Names:[k8s.gcr.io/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b k8s.gcr.io/metrics-server-amd64:v0.3.6],SizeBytes:10542830,},ContainerImage{Names:[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e k8s.gcr.io/prometheus-to-sd:v0.5.0],SizeBytes:9515805,},ContainerImage{Names:[k8s.gcr.io/addon-resizer@sha256:35745de3c9a2884d53ad0e81b39f1eed9a7c77f5f909b9e84f9712b37ffb3021 k8s.gcr.io/addon-resizer:1.8.11],SizeBytes:9347950,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a 
k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:1804628,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:299513,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},}
Oct 24 11:25:41.253: INFO: 
Logging kubelet events for node bootstrap-e2e-minion-group-vkx8
Oct 24 11:25:41.292: INFO: 
Logging pods the kubelet thinks are on node bootstrap-e2e-minion-group-vkx8
Oct 24 11:25:41.335: INFO: kube-proxy-bootstrap-e2e-minion-group-vkx8 started at 2020-10-24 09:41:17 +0000 UTC (0+1 container statuses recorded)
Oct 24 11:25:41.335: INFO: 	Container kube-proxy ready: true, restart count 0
... skipping 11 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl server-side dry-run
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:909
    should check if kubectl can dry-run update Pods [Conformance] [It]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629

    Oct 24 11:25:39.863: Unexpected error:
        <exec.CodeExitError>: {
            Err: {
                s: "error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=kubectl-3871 replace -f - --dry-run=server:\nCommand stdout:\n\nstderr:\nError from server (Conflict): error when replacing \"STDIN\": Operation cannot be fulfilled on pods \"e2e-test-httpd-pod\": the object has been modified; please apply your changes to the latest version and try again\n\nerror:\nexit status 1",
            },
            Code: 1,
        }
        error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=kubectl-3871 replace -f - --dry-run=server:
        Command stdout:
        
        stderr:
        Error from server (Conflict): error when replacing "STDIN": Operation cannot be fulfilled on pods "e2e-test-httpd-pod": the object has been modified; please apply your changes to the latest version and try again
        
        error:
        exit status 1
    occurred

    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:598
------------------------------
{"msg":"FAILED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":306,"completed":284,"skipped":4632,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
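The Conflict above is Kubernetes optimistic concurrency at work: `kubectl replace` sends the object it previously read, and the API server rejects it with a 409 when the stored object's `resourceVersion` has moved on (here, most likely the kubelet updated the pod's status between the test's read and its replace). A server-side dry-run still runs the full admission and validation path, so it hits the same check even though nothing would be persisted. The sketch below is a hypothetical, greatly simplified model of that check (names like `Store` and `Conflict` are illustrative, not the real apiserver code):

```python
class Conflict(Exception):
    """Stand-in for the API server's 409 Conflict response."""


class Store:
    """Toy object store with a monotonically increasing resourceVersion."""

    def __init__(self):
        self.objects = {}  # name -> (resourceVersion, spec)
        self.rv = 0

    def create(self, name, spec):
        self.rv += 1
        self.objects[name] = (self.rv, spec)

    def get(self, name):
        return self.objects[name]

    def replace(self, name, spec, resource_version, dry_run=False):
        current_rv, _ = self.objects[name]
        if resource_version != current_rv:
            # Mirrors: Operation cannot be fulfilled on pods "...": the
            # object has been modified; please apply your changes to the
            # latest version and try again
            raise Conflict(f'pods "{name}": the object has been modified')
        self.rv += 1
        if not dry_run:  # dry-run validates but never persists
            self.objects[name] = (self.rv, spec)
        return self.rv


store = Store()
store.create("e2e-test-httpd-pod", {"image": "httpd"})
rv, spec = store.get("e2e-test-httpd-pod")

# Another writer (e.g. the kubelet updating status) bumps the version...
store.replace("e2e-test-httpd-pod", spec, rv)

# ...so replaying the stale resourceVersion now fails, dry-run or not.
try:
    store.replace("e2e-test-httpd-pod", {"image": "httpd:2"}, rv, dry_run=True)
except Conflict as e:
    print("Conflict:", e)
```

The usual remedy (and what the error message asks for) is to re-read the object to pick up the latest `resourceVersion` and retry the replace; flaky e2e runs like this one race the kubelet for that version.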
SSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Services
... skipping 9 lines ...
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:25:41.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2179" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:781
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":306,"completed":285,"skipped":4638,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 62 lines ...
• [SLOW TEST:307.716 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":306,"completed":286,"skipped":4654,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Oct 24 11:30:50.024: INFO: Waiting up to 5m0s for pod "downwardapi-volume-20cfc757-0a3f-4734-b5c4-6d1e9f9c484d" in namespace "projected-7839" to be "Succeeded or Failed"
Oct 24 11:30:50.070: INFO: Pod "downwardapi-volume-20cfc757-0a3f-4734-b5c4-6d1e9f9c484d": Phase="Pending", Reason="", readiness=false. Elapsed: 46.138722ms
Oct 24 11:30:52.126: INFO: Pod "downwardapi-volume-20cfc757-0a3f-4734-b5c4-6d1e9f9c484d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.102567096s
STEP: Saw pod success
Oct 24 11:30:52.126: INFO: Pod "downwardapi-volume-20cfc757-0a3f-4734-b5c4-6d1e9f9c484d" satisfied condition "Succeeded or Failed"
Oct 24 11:30:52.169: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod downwardapi-volume-20cfc757-0a3f-4734-b5c4-6d1e9f9c484d container client-container: <nil>
STEP: delete the pod
Oct 24 11:30:52.288: INFO: Waiting for pod downwardapi-volume-20cfc757-0a3f-4734-b5c4-6d1e9f9c484d to disappear
Oct 24 11:30:52.328: INFO: Pod downwardapi-volume-20cfc757-0a3f-4734-b5c4-6d1e9f9c484d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:30:52.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7839" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":306,"completed":287,"skipped":4668,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-cli] Kubectl client
... skipping 10 lines ...
Oct 24 11:30:52.613: INFO: Asynchronously running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=kubectl-3348 proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:30:53.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3348" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":306,"completed":288,"skipped":4673,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Kubelet
... skipping 7 lines ...
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[AfterEach] [k8s.io] Kubelet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:30:55.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8984" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":289,"skipped":4678,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
S
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 37 lines ...
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:31:01.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3198" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":306,"completed":290,"skipped":4679,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSS
------------------------------
[sig-instrumentation] Events API 
  should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-instrumentation] Events API
... skipping 20 lines ...
STEP: listing events in all namespaces
STEP: listing events in test namespace
[AfterEach] [sig-instrumentation] Events API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:31:01.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-8811" for this suite.
•{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":306,"completed":291,"skipped":4687,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Container Runtime
... skipping 12 lines ...
Oct 24 11:31:03.465: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:31:03.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-157" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":306,"completed":292,"skipped":4717,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name configmap-test-volume-3043491c-a201-43fc-909d-b3d8e6e23f71
STEP: Creating a pod to test consume configMaps
Oct 24 11:31:03.923: INFO: Waiting up to 5m0s for pod "pod-configmaps-ea41853d-1ebe-4db0-8599-172ae138b65d" in namespace "configmap-1578" to be "Succeeded or Failed"
Oct 24 11:31:03.964: INFO: Pod "pod-configmaps-ea41853d-1ebe-4db0-8599-172ae138b65d": Phase="Pending", Reason="", readiness=false. Elapsed: 41.575716ms
Oct 24 11:31:06.005: INFO: Pod "pod-configmaps-ea41853d-1ebe-4db0-8599-172ae138b65d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.082060325s
STEP: Saw pod success
Oct 24 11:31:06.005: INFO: Pod "pod-configmaps-ea41853d-1ebe-4db0-8599-172ae138b65d" satisfied condition "Succeeded or Failed"
Oct 24 11:31:06.045: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod pod-configmaps-ea41853d-1ebe-4db0-8599-172ae138b65d container configmap-volume-test: <nil>
STEP: delete the pod
Oct 24 11:31:06.170: INFO: Waiting for pod pod-configmaps-ea41853d-1ebe-4db0-8599-172ae138b65d to disappear
Oct 24 11:31:06.210: INFO: Pod pod-configmaps-ea41853d-1ebe-4db0-8599-172ae138b65d no longer exists
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:31:06.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1578" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":306,"completed":293,"skipped":4730,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 22 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:31:15.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7603" for this suite.
STEP: Destroying namespace "webhook-7603-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":306,"completed":294,"skipped":4736,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 16 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:31:30.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-5824" for this suite.
STEP: Destroying namespace "nsdeletetest-5735" for this suite.
Oct 24 11:31:30.498: INFO: Namespace nsdeletetest-5735 was already deleted
STEP: Destroying namespace "nsdeletetest-9323" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":306,"completed":295,"skipped":4750,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
... skipping 7 lines ...
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:31:30.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-6505" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":306,"completed":296,"skipped":4755,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
S
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
... skipping 9 lines ...
STEP: creating the pod
Oct 24 11:31:31.094: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:31:37.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7877" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":306,"completed":297,"skipped":4756,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
S
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 39 lines ...
Oct 24 11:31:48.071: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=crd-publish-openapi-5304 explain e2e-test-crd-publish-openapi-21-crds.spec'
Oct 24 11:31:48.699: INFO: stderr: ""
Oct 24 11:31:48.700: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-21-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Oct 24 11:31:48.700: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=crd-publish-openapi-5304 explain e2e-test-crd-publish-openapi-21-crds.spec.bars'
Oct 24 11:31:49.339: INFO: stderr: ""
Oct 24 11:31:49.340: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-21-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Oct 24 11:31:49.340: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.105.36.219 --kubeconfig=/workspace/.kube/config --namespace=crd-publish-openapi-5304 explain e2e-test-crd-publish-openapi-21-crds.spec.bars2'
Oct 24 11:31:49.958: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:31:56.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5304" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":306,"completed":298,"skipped":4757,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] DNS
... skipping 18 lines ...
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:32:00.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2926" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":306,"completed":299,"skipped":4772,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name projected-configmap-test-volume-map-370a60c0-da3f-446a-8f92-07e369aca95a
STEP: Creating a pod to test consume configMaps
Oct 24 11:32:00.890: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9cc745a3-ffd8-4cd2-b4ba-04346706b135" in namespace "projected-4267" to be "Succeeded or Failed"
Oct 24 11:32:01.137: INFO: Pod "pod-projected-configmaps-9cc745a3-ffd8-4cd2-b4ba-04346706b135": Phase="Pending", Reason="", readiness=false. Elapsed: 246.806889ms
Oct 24 11:32:03.180: INFO: Pod "pod-projected-configmaps-9cc745a3-ffd8-4cd2-b4ba-04346706b135": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.290601566s
STEP: Saw pod success
Oct 24 11:32:03.181: INFO: Pod "pod-projected-configmaps-9cc745a3-ffd8-4cd2-b4ba-04346706b135" satisfied condition "Succeeded or Failed"
Oct 24 11:32:03.224: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod pod-projected-configmaps-9cc745a3-ffd8-4cd2-b4ba-04346706b135 container agnhost-container: <nil>
STEP: delete the pod
Oct 24 11:32:03.365: INFO: Waiting for pod pod-projected-configmaps-9cc745a3-ffd8-4cd2-b4ba-04346706b135 to disappear
Oct 24 11:32:03.409: INFO: Pod pod-projected-configmaps-9cc745a3-ffd8-4cd2-b4ba-04346706b135 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:32:03.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4267" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":300,"skipped":4776,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Oct 24 11:32:03.503: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0777 on node default medium
Oct 24 11:32:04.087: INFO: Waiting up to 5m0s for pod "pod-c2df0379-4b45-4abd-9ba2-533243689a75" in namespace "emptydir-6536" to be "Succeeded or Failed"
Oct 24 11:32:04.126: INFO: Pod "pod-c2df0379-4b45-4abd-9ba2-533243689a75": Phase="Pending", Reason="", readiness=false. Elapsed: 39.504324ms
Oct 24 11:32:06.213: INFO: Pod "pod-c2df0379-4b45-4abd-9ba2-533243689a75": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.126374582s
STEP: Saw pod success
Oct 24 11:32:06.213: INFO: Pod "pod-c2df0379-4b45-4abd-9ba2-533243689a75" satisfied condition "Succeeded or Failed"
Oct 24 11:32:06.274: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod pod-c2df0379-4b45-4abd-9ba2-533243689a75 container test-container: <nil>
STEP: delete the pod
Oct 24 11:32:06.454: INFO: Waiting for pod pod-c2df0379-4b45-4abd-9ba2-533243689a75 to disappear
Oct 24 11:32:06.512: INFO: Pod pod-c2df0379-4b45-4abd-9ba2-533243689a75 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:32:06.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6536" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":301,"skipped":4806,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Oct 24 11:32:07.009: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6ac7194a-9f2b-496d-b8f4-b064abf5014f" in namespace "downward-api-430" to be "Succeeded or Failed"
Oct 24 11:32:07.101: INFO: Pod "downwardapi-volume-6ac7194a-9f2b-496d-b8f4-b064abf5014f": Phase="Pending", Reason="", readiness=false. Elapsed: 91.38904ms
Oct 24 11:32:09.149: INFO: Pod "downwardapi-volume-6ac7194a-9f2b-496d-b8f4-b064abf5014f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.139656997s
STEP: Saw pod success
Oct 24 11:32:09.149: INFO: Pod "downwardapi-volume-6ac7194a-9f2b-496d-b8f4-b064abf5014f" satisfied condition "Succeeded or Failed"
Oct 24 11:32:09.193: INFO: Trying to get logs from node bootstrap-e2e-minion-group-vkx8 pod downwardapi-volume-6ac7194a-9f2b-496d-b8f4-b064abf5014f container client-container: <nil>
STEP: delete the pod
Oct 24 11:32:09.377: INFO: Waiting for pod downwardapi-volume-6ac7194a-9f2b-496d-b8f4-b064abf5014f to disappear
Oct 24 11:32:09.426: INFO: Pod downwardapi-volume-6ac7194a-9f2b-496d-b8f4-b064abf5014f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:32:09.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-430" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":306,"completed":302,"skipped":4823,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 27 lines ...
Oct 24 11:32:28.400: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:32:28.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9103" for this suite.
•{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":306,"completed":303,"skipped":4861,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 23 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:32:35.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3000" for this suite.
STEP: Destroying namespace "webhook-3000-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":306,"completed":304,"skipped":4907,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] 
  validates lower priority pod preemption by critical pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
... skipping 17 lines ...
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 24 11:33:51.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-5974" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77
•{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":306,"completed":305,"skipped":4918,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}
SSSSS
Oct 24 11:33:51.857: INFO: Running AfterSuite actions on all nodes
Oct 24 11:33:51.870: INFO: Running AfterSuite actions on node 1
Oct 24 11:33:51.870: INFO: Skipping dumping logs from cluster

JUnit report was created: /logs/artifacts/after/junit_01.xml
{"msg":"Test Suite completed","total":306,"completed":305,"skipped":4923,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]"]}


Summarizing 1 Failure:

[Fail] [sig-cli] Kubectl client Kubectl server-side dry-run [It] should check if kubectl can dry-run update Pods [Conformance] 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:598

Ran 306 of 5229 Specs in 6596.883 seconds
FAIL! -- 305 Passed | 1 Failed | 0 Pending | 4923 Skipped
--- FAIL: TestE2E (6596.94s)
FAIL

Ginkgo ran 1 suite in 1h49m58.268877128s
Test Suite Failed
2020/10/24 11:33:51 process.go:155: Step './hack/ginkgo-e2e.sh --ginkgo.focus=\[Conformance\] --report-dir=/logs/artifacts/after --disable-log-dump=true' finished in 1h49m59.5051458s
2020/10/24 11:33:51 e2e.go:544: Dumping logs locally to: /logs/artifacts/after
2020/10/24 11:33:51 process.go:153: Running: ./cluster/log-dump/log-dump.sh /logs/artifacts/after
Checking for custom logdump instances, if any
Sourcing kube-util.sh
Detecting project
... skipping 2 lines ...
Zone: us-west1-b
Dumping logs from master locally to '/logs/artifacts/after'
Trying to find master named 'bootstrap-e2e-master'
Looking for address 'bootstrap-e2e-master-ip'
Using master: bootstrap-e2e-master (external IP: 34.105.36.219; internal IP: (not set))
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Error: No such container: 
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Error: No such container: 
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Error: No such container: 
Changing logfiles to be world-readable for download
Copying 'kube-apiserver.log kube-apiserver-audit.log kube-scheduler.log kube-controller-manager.log etcd.log etcd-events.log glbc.log cluster-autoscaler.log kube-addon-manager.log konnectivity-server.log fluentd.log kubelet.cov startupscript.log' from bootstrap-e2e-master

Specify --start=56972 in the next get-serial-port-output invocation to get only the new output starting from here.
scp: /var/log/cluster-autoscaler.log*: No such file or directory
scp: /var/log/konnectivity-server.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Dumping logs from nodes locally to '/logs/artifacts/after'
Detecting nodes in the cluster
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Error: No such container: 
Changing logfiles to be world-readable for download
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Error: No such container: 
Changing logfiles to be world-readable for download
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Error: No such container: 
Changing logfiles to be world-readable for download
Copying 'kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov startupscript.log' from bootstrap-e2e-minion-group-vkx8
Copying 'kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov startupscript.log' from bootstrap-e2e-minion-group-g27b

Specify --start=107094 in the next get-serial-port-output invocation to get only the new output starting from here.
Copying 'kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov startupscript.log' from bootstrap-e2e-minion-group-bf58

Specify --start=82240 in the next get-serial-port-output invocation to get only the new output starting from here.

Specify --start=76452 in the next get-serial-port-output invocation to get only the new output starting from here.
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
INSTANCE_GROUPS=bootstrap-e2e-minion-group
NODE_NAMES=bootstrap-e2e-minion-group-bf58 bootstrap-e2e-minion-group-g27b bootstrap-e2e-minion-group-vkx8
Failures for bootstrap-e2e-minion-group (if any):
2020/10/24 11:36:17 process.go:155: Step './cluster/log-dump/log-dump.sh /logs/artifacts/after' finished in 2m25.990324762s
2020/10/24 11:36:17 process.go:153: Running: ./hack/e2e-internal/e2e-down.sh
Project: gce-up-c1-4-glat-up-clu
... skipping 40 lines ...
Property "users.gce-up-c1-4-glat-up-clu_bootstrap-e2e-basic-auth" unset.
Property "contexts.gce-up-c1-4-glat-up-clu_bootstrap-e2e" unset.
Cleared config for gce-up-c1-4-glat-up-clu_bootstrap-e2e from /workspace/.kube/config
Done
2020/10/24 11:42:32 process.go:155: Step './hack/e2e-internal/e2e-down.sh' finished in 6m14.642854452s
2020/10/24 11:42:32 process.go:96: Saved XML output to /logs/artifacts/after/junit_runner.xml.
2020/10/24 11:42:32 main.go:316: Something went wrong: encountered 1 errors: [error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Conformance\] --report-dir=/logs/artifacts/after --disable-log-dump=true: exit status 1]
Traceback (most recent call last):
  File "../test-infra/scenarios/kubernetes_e2e.py", line 720, in <module>
    main(parse_args())
  File "../test-infra/scenarios/kubernetes_e2e.py", line 570, in main
    mode.start(runner_args)
  File "../test-infra/scenarios/kubernetes_e2e.py", line 228, in start
... skipping 16 lines ...