Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2020-01-16 06:15
Elapsed: 1h8m
Revision: master
resultstore: https://source.cloud.google.com/results/invocations/1b1ad4c9-6077-4049-b70a-8ff16fd598e5/targets/test

No Test Failures!


Error lines from build-log.txt

... skipping 455 lines ...
Project: k8s-jkns-e2e-gke-ubuntu-slow
Network Project: k8s-jkns-e2e-gke-ubuntu-slow
Zone: us-west1-b
INSTANCE_GROUPS=
NODE_NAMES=
Bringing down cluster
ERROR: (gcloud.compute.instances.list) Some requests did not succeed:
 - Invalid value for field 'zone': 'asia-northeast3-a'. Unknown zone.
 - Invalid value for field 'zone': 'asia-northeast3-b'. Unknown zone.
 - Invalid value for field 'zone': 'asia-northeast3-c'. Unknown zone.

Deleting firewall rules remaining in network bootstrap-e2e: 
W0116 06:43:03.441479  107266 loader.go:223] Config not found: /workspace/.kube/config
... skipping 144 lines ...
Trying to find master named 'bootstrap-e2e-master'
Looking for address 'bootstrap-e2e-master-ip'
Using master: bootstrap-e2e-master (external IP: 34.83.159.163; internal IP: (not set))
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

.................Kubernetes cluster created.
Cluster "k8s-jkns-e2e-gke-ubuntu-slow_bootstrap-e2e" set.
User "k8s-jkns-e2e-gke-ubuntu-slow_bootstrap-e2e" set.
Context "k8s-jkns-e2e-gke-ubuntu-slow_bootstrap-e2e" created.
Switched to context "k8s-jkns-e2e-gke-ubuntu-slow_bootstrap-e2e".
... skipping 27 lines ...
bootstrap-e2e-master              Ready,SchedulingDisabled   <none>   19s   v1.18.0-alpha.1.810+f437ff75d45517
bootstrap-e2e-minion-group-5wcz   Ready                      <none>   21s   v1.18.0-alpha.1.810+f437ff75d45517
bootstrap-e2e-minion-group-9dh8   Ready                      <none>   21s   v1.18.0-alpha.1.810+f437ff75d45517
bootstrap-e2e-minion-group-mnwl   Ready                      <none>   21s   v1.18.0-alpha.1.810+f437ff75d45517
bootstrap-e2e-minion-group-n0jl   Ready                      <none>   20s   v1.18.0-alpha.1.810+f437ff75d45517
Validate output:
NAME                 STATUS    MESSAGE             ERROR
etcd-0               Healthy   {"health":"true"}   
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-1               Healthy   {"health":"true"}   
Cluster validation succeeded
Done, listing cluster services:
... skipping 77 lines ...
Copying 'kube-apiserver.log kube-apiserver-audit.log kube-scheduler.log kube-controller-manager.log etcd.log etcd-events.log glbc.log cluster-autoscaler.log kube-addon-manager.log fluentd.log kubelet.cov startupscript.log' from bootstrap-e2e-master

Specify --start=46919 in the next get-serial-port-output invocation to get only the new output starting from here.
scp: /var/log/cluster-autoscaler.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Dumping logs from nodes locally to '/logs/artifacts/before'
Detecting nodes in the cluster
Changing logfiles to be world-readable for download
Changing logfiles to be world-readable for download
Changing logfiles to be world-readable for download
Changing logfiles to be world-readable for download
... skipping 9 lines ...
scp: /var/log/node-problem-detector.log*: No such file or directory

Specify --start=47851 in the next get-serial-port-output invocation to get only the new output starting from here.

Specify --start=48793 in the next get-serial-port-output invocation to get only the new output starting from here.
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
INSTANCE_GROUPS=bootstrap-e2e-minion-group
NODE_NAMES=bootstrap-e2e-minion-group-5wcz bootstrap-e2e-minion-group-9dh8 bootstrap-e2e-minion-group-mnwl bootstrap-e2e-minion-group-n0jl
Failures for bootstrap-e2e-minion-group (if any):
2020/01/16 06:50:31 process.go:155: Step './cluster/log-dump/log-dump.sh /logs/artifacts/before' finished in 2m20.055508441s
2020/01/16 06:50:31 process.go:153: Running: ./hack/e2e-internal/e2e-status.sh
Project: k8s-jkns-e2e-gke-ubuntu-slow
... skipping 72 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: vsphere]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Only supported for providers [vsphere] (not gce)

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1383
------------------------------
... skipping 1316 lines ...
Jan 16 06:50:53.576: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename configmap
Jan 16 06:50:55.945: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled.
Jan 16 06:50:57.033: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-277
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap that has name configmap-test-emptyKey-635550b8-d20a-475e-843a-8911ca2f643a
[AfterEach] [sig-node] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:50:57.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-277" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":-1,"completed":1,"skipped":6,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:50:58.409: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 113 lines ...
• [SLOW TEST:6.917 seconds]
[sig-storage] HostPath
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should support r/w [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:65
------------------------------
{"msg":"PASSED [sig-storage] HostPath should support r/w [NodeConformance]","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:51:00.408: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:51:00.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 42 lines ...
• [SLOW TEST:6.793 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should prevent NodePort collisions
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1752
------------------------------
{"msg":"PASSED [sig-network] Services should prevent NodePort collisions","total":-1,"completed":1,"skipped":8,"failed":0}

SS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 31 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run deployment
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1731
    should create a deployment from an image  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image  [Conformance]","total":-1,"completed":1,"skipped":5,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:51:01.318: INFO: Driver csi-hostpath doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:51:01.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 43 lines ...
• [SLOW TEST:6.793 seconds]
[sig-storage] ConfigMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":10,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:51:05.593: INFO: Driver nfs doesn't support ext3 -- skipping
... skipping 39 lines ...
• [SLOW TEST:12.365 seconds]
[sig-api-machinery] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":-1,"completed":1,"skipped":8,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 27 lines ...
• [SLOW TEST:12.420 seconds]
[sig-storage] Secrets
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":16,"failed":0}

SS
------------------------------
[BeforeEach] [k8s.io] Container Runtime
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 21 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when running a container with a new image
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:263
      should be able to pull image [NodeConformance]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:374
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]","total":-1,"completed":1,"skipped":5,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:51:09.147: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:51:09.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 68 lines ...
• [SLOW TEST:15.262 seconds]
[sig-storage] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 28 lines ...
• [SLOW TEST:16.170 seconds]
[sig-storage] ConfigMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":5,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:51:09.735: INFO: Only supported for providers [openstack] (not gce)
... skipping 43 lines ...
• [SLOW TEST:18.009 seconds]
[sig-api-machinery] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":-1,"completed":1,"skipped":8,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:51:11.596: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:51:11.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 129 lines ...
• [SLOW TEST:18.889 seconds]
[sig-network] DNS
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should support configurable pod resolv.conf
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:455
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod resolv.conf","total":-1,"completed":1,"skipped":4,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:51:12.440: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:51:12.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 138 lines ...
• [SLOW TEST:7.994 seconds]
[sig-api-machinery] Secrets
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:34
  should be consumable via the environment [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":15,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:51:13.603: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:51:13.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 41 lines ...
• [SLOW TEST:20.145 seconds]
[sig-api-machinery] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:51:13.662: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:51:13.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 173 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:51:15.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-341" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget","total":-1,"completed":3,"skipped":30,"failed":0}

SS
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 16 lines ...
• [SLOW TEST:16.616 seconds]
[sig-apps] DisruptionController
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  evictions: maxUnavailable deny evictions, integer => should not allow an eviction
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:151
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: maxUnavailable deny evictions, integer =\u003e should not allow an eviction","total":-1,"completed":2,"skipped":10,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 58 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    One pod requesting one prebound PVC
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
      should be able to mount volume and read from pod1
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":1,"skipped":10,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 19 lines ...
• [SLOW TEST:14.388 seconds]
[sig-storage] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  pod should support shared volumes between containers [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":-1,"completed":2,"skipped":4,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 12 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:51:25.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6552" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":-1,"completed":3,"skipped":7,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] DNS
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 27 lines ...
• [SLOW TEST:32.122 seconds]
[sig-network] DNS
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:51:25.691: INFO: Driver gluster doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:51:25.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 172 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  When pod refers to non-existent ephemeral storage
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53
    should allow deletion of pod with invalid volume : projected
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : projected","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:51:37.680: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:51:37.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 86 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:377
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:51:37.993: INFO: Driver hostPathSymlink doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:51:37.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 169 lines ...
• [SLOW TEST:14.752 seconds]
[sig-storage] Projected downwardAPI
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":11,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:51:38.107: INFO: Driver cinder doesn't support ext4 -- skipping
... skipping 504 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    Two pods mounting a local volume one after the other
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
      should be able to write from pod1 and read from pod2
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:250
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":1,"skipped":2,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:51:41.192: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 97 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":1,"skipped":10,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:51:42.558: INFO: Driver cinder doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:51:42.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 66 lines ...
• [SLOW TEST:10.270 seconds]
[sig-network] DNS
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should support configurable pod DNS nameservers [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":2,"skipped":19,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:51:48.335: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:51:48.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 419 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:525
    should support exec
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:537
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec","total":-1,"completed":1,"skipped":3,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 146 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:525
    should return command exit codes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:645
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should return command exit codes","total":-1,"completed":2,"skipped":9,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:51:54.557: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 63 lines ...
      Only supported for providers [openstack] (not gce)

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1080
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":1,"skipped":8,"failed":0}
[BeforeEach] [sig-api-machinery] ResourceQuota
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 06:51:39.483: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-435
... skipping 20 lines ...
• [SLOW TEST:16.891 seconds]
[sig-api-machinery] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":17,"failed":0}
[BeforeEach] [sig-node] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 06:51:48.346: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-8431
... skipping 21 lines ...
• [SLOW TEST:8.034 seconds]
[sig-node] ConfigMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":17,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:51:56.383: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:51:56.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 217 lines ...
• [SLOW TEST:31.846 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":2,"skipped":7,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:51:57.562: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 106 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support file as subpath [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:228
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":1,"skipped":2,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 25 lines ...
• [SLOW TEST:7.600 seconds]
[sig-storage] Projected downwardAPI
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":6,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:51:58.087: INFO: Driver nfs doesn't support ntfs -- skipping
... skipping 85 lines ...
      Driver supports dynamic provisioning, skipping InlineVolume pattern

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:688
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory","total":-1,"completed":1,"skipped":23,"failed":0}
[BeforeEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 06:51:51.523: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-2894
... skipping 20 lines ...
• [SLOW TEST:7.870 seconds]
[sig-storage] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":23,"failed":0}

SSS
------------------------------
[BeforeEach] [k8s.io] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 21 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when creating containers with AllowPrivilegeEscalation
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:290
    should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:329
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":3,"skipped":18,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:52:04.193: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:52:04.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 61 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support non-existent path
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:192
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":3,"skipped":55,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:52:06.186: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 49 lines ...
• [SLOW TEST:12.071 seconds]
[sig-storage] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":20,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:52:06.674: INFO: Driver local doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:52:06.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 66 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":8,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:52:07.350: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:52:07.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 60 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
S
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":-1,"completed":2,"skipped":8,"failed":0}
[BeforeEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 06:51:56.376: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-6711
... skipping 23 lines ...
• [SLOW TEST:11.634 seconds]
[sig-storage] ConfigMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":8,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:87
Jan 16 06:52:08.012: INFO: Driver "csi-hostpath" does not support topology - skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
... skipping 28 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:52:07.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "runtimeclass-1242" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with an unconfigured handler","total":-1,"completed":4,"skipped":20,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:52:08.149: INFO: Driver nfs doesn't support ext3 -- skipping
... skipping 198 lines ...
• [SLOW TEST:45.341 seconds]
[sig-api-machinery] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a custom resource.
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:559
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a custom resource.","total":-1,"completed":3,"skipped":14,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:52:13.296: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 141 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:377
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":2,"skipped":6,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
... skipping 148 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: nfs]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver "nfs" does not support topology - skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:96
------------------------------
... skipping 134 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:121
    when invoking the Recycle reclaim policy
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:264
      should test that a PV becomes Available and is clean after the PVC is deleted.
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:282
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS when invoking the Recycle reclaim policy should test that a PV becomes Available and is clean after the PVC is deleted.","total":-1,"completed":1,"skipped":1,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 64 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    One pod requesting one prebound PVC
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
      should be able to mount volume and read from pod1
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":2,"skipped":2,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:52:17.678: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 39 lines ...
• [SLOW TEST:19.957 seconds]
[sig-apps] ReplicationController
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":2,"skipped":3,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] client-go should negotiate
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 4 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:52:18.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/vnd.kubernetes.protobuf\"","total":-1,"completed":3,"skipped":4,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:52:18.604: INFO: Driver vsphere doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:52:18.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 47 lines ...
• [SLOW TEST:11.472 seconds]
[sig-storage] Projected downwardAPI
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":14,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:52:18.888: INFO: Driver nfs doesn't support ntfs -- skipping
... skipping 43 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when creating containers with AllowPrivilegeEscalation
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:290
    should allow privilege escalation when true [LinuxOnly] [NodeConformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:361
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]","total":-1,"completed":4,"skipped":24,"failed":0}

SS
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:14.345 seconds]
[sig-apps] DisruptionController
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  evictions: enough pods, replicaSet, percentage => should allow an eviction
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:151
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: enough pods, replicaSet, percentage =\u003e should allow an eviction","total":-1,"completed":4,"skipped":9,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:52:22.375: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:52:22.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 91 lines ...
STEP: Scaling down replication controller to zero
STEP: Scaling ReplicationController slow-terminating-unready-pod in namespace services-8168 to 0
STEP: Update service to not tolerate unready services
STEP: Check if pod is unreachable
Jan 16 06:52:02.192: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.159.163 --kubeconfig=/workspace/.kube/config exec --namespace=services-8168 execpod-kxm47 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-8168.svc.cluster.local:80/; test "$?" -ne "0"'
Jan 16 06:52:04.093: INFO: rc: 1
Jan 16 06:52:04.093: INFO: expected un-ready endpoint for Service slow-terminating-unready-pod, stdout: , err: error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.159.163 --kubeconfig=/workspace/.kube/config exec --namespace=services-8168 execpod-kxm47 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-8168.svc.cluster.local:80/; test "$?" -ne "0":
Command stdout:
NOW: 2020-01-16 06:52:03.945560672 +0000 UTC m=+20.953082128
stderr:
+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-8168.svc.cluster.local:80/
+ test 0 -ne 0
command terminated with exit code 1

error:
exit status 1
Jan 16 06:52:06.094: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.159.163 --kubeconfig=/workspace/.kube/config exec --namespace=services-8168 execpod-kxm47 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-8168.svc.cluster.local:80/; test "$?" -ne "0"'
Jan 16 06:52:08.054: INFO: rc: 1
Jan 16 06:52:08.054: INFO: expected un-ready endpoint for Service slow-terminating-unready-pod, stdout: , err: error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.159.163 --kubeconfig=/workspace/.kube/config exec --namespace=services-8168 execpod-kxm47 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-8168.svc.cluster.local:80/; test "$?" -ne "0":
Command stdout:
NOW: 2020-01-16 06:52:07.935322919 +0000 UTC m=+24.942844380
stderr:
+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-8168.svc.cluster.local:80/
+ test 0 -ne 0
command terminated with exit code 1

error:
exit status 1
Jan 16 06:52:08.094: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.159.163 --kubeconfig=/workspace/.kube/config exec --namespace=services-8168 execpod-kxm47 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-8168.svc.cluster.local:80/; test "$?" -ne "0"'
Jan 16 06:52:12.562: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-8168.svc.cluster.local:80/\n+ test 7 -ne 0\n"
Jan 16 06:52:12.562: INFO: stdout: ""
STEP: Update service to tolerate unready services again
STEP: Check if terminating pod is available through service
Jan 16 06:52:14.186: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.159.163 --kubeconfig=/workspace/.kube/config exec --namespace=services-8168 execpod-kxm47 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-8168.svc.cluster.local:80/'
Jan 16 06:52:17.638: INFO: rc: 7
Jan 16 06:52:17.638: INFO: expected ready endpoint for Service slow-terminating-unready-pod, stdout: , err: error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.159.163 --kubeconfig=/workspace/.kube/config exec --namespace=services-8168 execpod-kxm47 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-8168.svc.cluster.local:80/:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-8168.svc.cluster.local:80/
command terminated with exit code 7

error:
exit status 7
Jan 16 06:52:19.659: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.159.163 --kubeconfig=/workspace/.kube/config exec --namespace=services-8168 execpod-kxm47 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-8168.svc.cluster.local:80/'
Jan 16 06:52:23.761: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-8168.svc.cluster.local:80/\n"
Jan 16 06:52:23.762: INFO: stdout: "NOW: 2020-01-16 06:52:23.43296121 +0000 UTC m=+40.440482666"
STEP: Remove pods immediately
STEP: stopping RC slow-terminating-unready-pod in namespace services-8168
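
Note: the reachability checks above are a retry loop around one shell one-liner; the same poll by hand, using the exec pod and service names from this run (curl exit 0 means the endpoint still answers, so the loop ends once the compound command succeeds, i.e. once curl fails):

until kubectl --namespace=services-8168 exec execpod-kxm47 -- \
    /bin/sh -c 'curl -q -s --connect-timeout 2 http://tolerate-unready.services-8168.svc.cluster.local:80/; test "$?" -ne "0"'
do
  sleep 2   # endpoint still reachable; retry
done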
... skipping 69 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:525
    should support port-forward
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:752
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support port-forward","total":-1,"completed":4,"skipped":62,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":2,"skipped":9,"failed":0}
[BeforeEach] [sig-network] Networking
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 06:51:11.978: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nettest-2604
... skipping 26 lines ...
STEP: Creating the service on top of the pods in kubernetes
Jan 16 06:51:41.938: INFO: Service node-port-service in namespace nettest-2604 found.
Jan 16 06:51:42.158: INFO: Service session-affinity-service in namespace nettest-2604 found.
STEP: dialing(udp) test-container-pod --> 10.0.8.93:90
Jan 16 06:51:42.290: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.64.3.24:8080/dial?request=hostName&protocol=udp&host=10.0.8.93&port=90&tries=1'] Namespace:nettest-2604 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 16 06:51:42.290: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 16 06:51:48.439: INFO: Tries: 10, in try: 0, stdout: {"errors":["reading from udp connection failed. err:'read udp 10.64.3.24:41660-\u003e10.0.8.93:90: i/o timeout'"]}, stderr: , command run in: (*v1.Pod)(nil)
Jan 16 06:51:50.690: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.64.3.24:8080/dial?request=hostName&protocol=udp&host=10.0.8.93&port=90&tries=1'] Namespace:nettest-2604 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 16 06:51:50.690: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 16 06:51:52.030: INFO: Tries: 10, in try: 1, stdout: {"responses":["netserver-2"]}, stderr: , command run in: (*v1.Pod)(nil)
Jan 16 06:51:54.546: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.64.3.24:8080/dial?request=hostName&protocol=udp&host=10.0.8.93&port=90&tries=1'] Namespace:nettest-2604 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 16 06:51:54.546: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 16 06:51:55.578: INFO: Tries: 10, in try: 2, stdout: {"responses":["netserver-2"]}, stderr: , command run in: (*v1.Pod)(nil)
... skipping 29 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  Granular Checks: Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:161
    should function for client IP based session affinity: udp [LinuxOnly]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:282
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for client IP based session affinity: udp [LinuxOnly]","total":-1,"completed":3,"skipped":9,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:52:26.516: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 54 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: emptydir]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver emptydir doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 136 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    One pod requesting one prebound PVC
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
      should be able to mount volume and write from pod1
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:233
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":4,"skipped":35,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
... skipping 134 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47
    creating/deleting custom resource definition objects works  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":-1,"completed":5,"skipped":36,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:52:33.966: INFO: Distro gci doesn't support ntfs -- skipping
... skipping 85 lines ...
      Only supported for providers [vsphere] (not gce)

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1383
------------------------------
S
------------------------------
{"msg":"PASSED [sig-network] Services should create endpoints for unready pods","total":-1,"completed":3,"skipped":45,"failed":0}
[BeforeEach] [k8s.io] Container Runtime
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 06:52:25.470: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-6225
... skipping 20 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":45,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 42 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directory
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:203
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":4,"skipped":36,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:52:36.786: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:52:36.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 95 lines ...
Jan 16 06:52:18.318: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.159.163 --kubeconfig=/workspace/.kube/config exec gcepd-client --namespace=volume-8302 -- grep  /opt/0  /proc/mounts'
Jan 16 06:52:20.988: INFO: stderr: ""
Jan 16 06:52:20.988: INFO: stdout: "/dev/sdb /opt/0 ext3 rw,relatime 0 0\n"
STEP: cleaning the environment after gcepd
Jan 16 06:52:20.988: INFO: Deleting pod "gcepd-client" in namespace "volume-8302"
Jan 16 06:52:21.374: INFO: Wait up to 5m0s for pod "gcepd-client" to be fully deleted
Jan 16 06:52:28.546: INFO: error deleting PD "bootstrap-e2e-4866c894-e7a1-4364-97c3-896ff5ff2699": googleapi: Error 400: The disk resource 'projects/k8s-jkns-e2e-gke-ubuntu-slow/zones/us-west1-b/disks/bootstrap-e2e-4866c894-e7a1-4364-97c3-896ff5ff2699' is already being used by 'projects/k8s-jkns-e2e-gke-ubuntu-slow/zones/us-west1-b/instances/bootstrap-e2e-minion-group-mnwl', resourceInUseByAnotherResource
Jan 16 06:52:28.547: INFO: Couldn't delete PD "bootstrap-e2e-4866c894-e7a1-4364-97c3-896ff5ff2699", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-jkns-e2e-gke-ubuntu-slow/zones/us-west1-b/disks/bootstrap-e2e-4866c894-e7a1-4364-97c3-896ff5ff2699' is already being used by 'projects/k8s-jkns-e2e-gke-ubuntu-slow/zones/us-west1-b/instances/bootstrap-e2e-minion-group-mnwl', resourceInUseByAnotherResource
Jan 16 06:52:36.426: INFO: Successfully deleted PD "bootstrap-e2e-4866c894-e7a1-4364-97c3-896ff5ff2699".
Jan 16 06:52:36.426: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
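
Note: the 400 resourceInUseByAnotherResource above clears once the node releases the disk; the manual equivalent of the retry-then-delete, with the disk and instance names from this run:

gcloud compute instances detach-disk bootstrap-e2e-minion-group-mnwl \
  --disk=bootstrap-e2e-4866c894-e7a1-4364-97c3-896ff5ff2699 --zone=us-west1-b
gcloud compute disks delete bootstrap-e2e-4866c894-e7a1-4364-97c3-896ff5ff2699 \
  --zone=us-west1-b --quiet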
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:52:36.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-8302" for this suite.
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (ext3)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should store data
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (ext3)] volumes should store data","total":-1,"completed":1,"skipped":1,"failed":0}

SSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":2,"skipped":18,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 06:51:30.378: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-1515
... skipping 72 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    Two pods mounting a local volume at the same time
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:243
      should be able to write from pod1 and read from pod2
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:244
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":3,"skipped":18,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:52:37.964: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:52:37.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 111 lines ...
STEP: Deleting pod hostexec-bootstrap-e2e-minion-group-mnwl-2xf77 in namespace volumemode-6961
Jan 16 06:52:26.806: INFO: Deleting pod "security-context-84f4acd4-239b-4238-946a-9e44c3a85fce" in namespace "volumemode-6961"
Jan 16 06:52:26.906: INFO: Wait up to 5m0s for pod "security-context-84f4acd4-239b-4238-946a-9e44c3a85fce" to be fully deleted
STEP: Deleting pv and pvc
Jan 16 06:52:33.376: INFO: Deleting PersistentVolumeClaim "pvc-4r8px"
Jan 16 06:52:33.818: INFO: Deleting PersistentVolume "gcepd-gs8ls"
Jan 16 06:52:35.455: INFO: error deleting PD "bootstrap-e2e-16e8482f-6fc5-42ad-8275-7855fdcb16ee": googleapi: Error 400: The disk resource 'projects/k8s-jkns-e2e-gke-ubuntu-slow/zones/us-west1-b/disks/bootstrap-e2e-16e8482f-6fc5-42ad-8275-7855fdcb16ee' is already being used by 'projects/k8s-jkns-e2e-gke-ubuntu-slow/zones/us-west1-b/instances/bootstrap-e2e-minion-group-mnwl', resourceInUseByAnotherResource
Jan 16 06:52:35.455: INFO: Couldn't delete PD "bootstrap-e2e-16e8482f-6fc5-42ad-8275-7855fdcb16ee", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-jkns-e2e-gke-ubuntu-slow/zones/us-west1-b/disks/bootstrap-e2e-16e8482f-6fc5-42ad-8275-7855fdcb16ee' is already being used by 'projects/k8s-jkns-e2e-gke-ubuntu-slow/zones/us-west1-b/instances/bootstrap-e2e-minion-group-mnwl', resourceInUseByAnotherResource
Jan 16 06:52:42.309: INFO: Successfully deleted PD "bootstrap-e2e-16e8482f-6fc5-42ad-8275-7855fdcb16ee".
Jan 16 06:52:42.309: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:52:42.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volumemode-6961" for this suite.
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should not mount / map unused volumes in a pod
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:332
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod","total":-1,"completed":2,"skipped":12,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 84 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when scheduling a busybox Pod with hostAliases
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":46,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 50 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should be able to unmount after the subpath directory is deleted
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:440
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":1,"skipped":2,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:52:47.963: INFO: Driver vsphere doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:52:47.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 162 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  With a server listening on localhost
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:466
    should support forwarding over websockets
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:482
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost should support forwarding over websockets","total":-1,"completed":5,"skipped":47,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:52:50.555: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 153 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directories when readOnly specified in the volumeSource
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:392
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":4,"skipped":32,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:52:50.638: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 61 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:377

      Driver emptydir doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":2,"skipped":9,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 06:52:14.642: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 63 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing single file [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:217
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":3,"skipped":9,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:52:55.211: INFO: Only supported for providers [openstack] (not gce)
... skipping 46 lines ...
• [SLOW TEST:13.365 seconds]
[sig-storage] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":47,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:52:57.065: INFO: Driver gluster doesn't support DynamicPV -- skipping
... skipping 43 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  When pod refers to non-existent ephemeral storage
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53
    should allow deletion of pod with invalid volume : secret
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":15,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:93
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
... skipping 61 lines ...
Jan 16 06:51:53.554: INFO: PersistentVolumeClaim csi-hostpathchp2m found but phase is Pending instead of Bound.
Jan 16 06:51:55.833: INFO: PersistentVolumeClaim csi-hostpathchp2m found but phase is Pending instead of Bound.
Jan 16 06:51:57.950: INFO: PersistentVolumeClaim csi-hostpathchp2m found but phase is Pending instead of Bound.
Jan 16 06:52:00.098: INFO: PersistentVolumeClaim csi-hostpathchp2m found and phase=Bound (28.686170461s)
STEP: Expanding non-expandable pvc
Jan 16 06:52:00.316: INFO: currentPvcSize {{1048576 0} {<nil>} 1Mi BinarySI}, newSize {{1074790400 0} {<nil>}  BinarySI}
Jan 16 06:52:00.662: INFO: Error updating pvc csi-hostpathchp2m: persistentvolumeclaims "csi-hostpathchp2m" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 16 06:52:03.154: INFO: Error updating pvc csi-hostpathchp2m: persistentvolumeclaims "csi-hostpathchp2m" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 16 06:52:04.886: INFO: Error updating pvc csi-hostpathchp2m: persistentvolumeclaims "csi-hostpathchp2m" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 16 06:52:07.271: INFO: Error updating pvc csi-hostpathchp2m: persistentvolumeclaims "csi-hostpathchp2m" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 16 06:52:08.990: INFO: Error updating pvc csi-hostpathchp2m: persistentvolumeclaims "csi-hostpathchp2m" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 16 06:52:11.130: INFO: Error updating pvc csi-hostpathchp2m: persistentvolumeclaims "csi-hostpathchp2m" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 16 06:52:13.746: INFO: Error updating pvc csi-hostpathchp2m: persistentvolumeclaims "csi-hostpathchp2m" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 16 06:52:15.165: INFO: Error updating pvc csi-hostpathchp2m: persistentvolumeclaims "csi-hostpathchp2m" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 16 06:52:17.554: INFO: Error updating pvc csi-hostpathchp2m: persistentvolumeclaims "csi-hostpathchp2m" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 16 06:52:19.482: INFO: Error updating pvc csi-hostpathchp2m: persistentvolumeclaims "csi-hostpathchp2m" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 16 06:52:21.310: INFO: Error updating pvc csi-hostpathchp2m: persistentvolumeclaims "csi-hostpathchp2m" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 16 06:52:23.050: INFO: Error updating pvc csi-hostpathchp2m: persistentvolumeclaims "csi-hostpathchp2m" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 16 06:52:24.872: INFO: Error updating pvc csi-hostpathchp2m: persistentvolumeclaims "csi-hostpathchp2m" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 16 06:52:26.862: INFO: Error updating pvc csi-hostpathchp2m: persistentvolumeclaims "csi-hostpathchp2m" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 16 06:52:29.439: INFO: Error updating pvc csi-hostpathchp2m: persistentvolumeclaims "csi-hostpathchp2m" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 16 06:52:31.415: INFO: Error updating pvc csi-hostpathchp2m: persistentvolumeclaims "csi-hostpathchp2m" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 16 06:52:32.230: INFO: Error updating pvc csi-hostpathchp2m: persistentvolumeclaims "csi-hostpathchp2m" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
STEP: Deleting pvc
Jan 16 06:52:32.230: INFO: Deleting PersistentVolumeClaim "csi-hostpathchp2m"
Jan 16 06:52:32.962: INFO: Waiting up to 5m0s for PersistentVolume pvc-228b480d-d66e-4f7b-9163-840297ebb8f8 to get deleted
Jan 16 06:52:33.373: INFO: PersistentVolume pvc-228b480d-d66e-4f7b-9163-840297ebb8f8 found and phase=Bound (410.842821ms)
Jan 16 06:52:38.798: INFO: PersistentVolume pvc-228b480d-d66e-4f7b-9163-840297ebb8f8 found and phase=Bound (5.836179027s)
Jan 16 06:52:44.255: INFO: PersistentVolume pvc-228b480d-d66e-4f7b-9163-840297ebb8f8 was removed
... skipping 48 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:55
    [Testpattern: Dynamic PV (default fs)] volume-expand
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should not allow expansion of pvcs without AllowVolumeExpansion property
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:147
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":3,"skipped":15,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 34 lines ...
• [SLOW TEST:37.718 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":5,"skipped":26,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:52:59.688: INFO: Only supported for providers [aws] (not gce)
... skipping 35 lines ...
Jan 16 06:52:55.121: INFO: Got stdout from 34.83.8.14:22: Hello from prow@bootstrap-e2e-minion-group-n0jl
STEP: SSH'ing to 1 nodes and running echo "foo" | grep "bar"
STEP: SSH'ing to 1 nodes and running echo "stdout" && echo "stderr" >&2 && exit 7
Jan 16 06:52:56.359: INFO: Got stdout from 104.198.8.95:22: stdout
Jan 16 06:52:56.359: INFO: Got stderr from 104.198.8.95:22: stderr
STEP: SSH'ing to a nonexistent host
error dialing prow@i.do.not.exist: 'dial tcp: address i.do.not.exist: missing port in address', retrying
[AfterEach] [k8s.io] [sig-node] SSH
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:53:01.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ssh-6862" for this suite.


• [SLOW TEST:11.197 seconds]
[k8s.io] [sig-node] SSH
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should SSH to all nodes and run commands
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:45
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] SSH should SSH to all nodes and run commands","total":-1,"completed":6,"skipped":58,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:53:01.846: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 98 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPath]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver hostPath doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 76 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    Two pods mounting a local volume at the same time
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:243
      should be able to write from pod1 and read from pod2
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:244
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":5,"skipped":12,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:53:04.860: INFO: Only supported for providers [vsphere] (not gce)
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:53:04.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 171 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should be able to unmount after the subpath directory is deleted
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:440
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":4,"skipped":6,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : secret","total":-1,"completed":2,"skipped":3,"failed":0}
[BeforeEach] [k8s.io] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 06:52:58.067: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-2935
... skipping 18 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  When creating a container with runAsNonRoot
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:98
    should run with an image specified user ID
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:145
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsNonRoot should run with an image specified user ID","total":-1,"completed":3,"skipped":3,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:53:05.961: INFO: Only supported for providers [vsphere] (not gce)
... skipping 15 lines ...
      Only supported for providers [vsphere] (not gce)

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1383
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-network] Network should set TCP CLOSE_WAIT timeout","total":-1,"completed":5,"skipped":41,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 06:53:05.371: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 84 lines ...
• [SLOW TEST:18.450 seconds]
[sig-apps] Deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment reaping should cascade to its replica sets and pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment reaping should cascade to its replica sets and pods","total":-1,"completed":5,"skipped":41,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:53:09.127: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:53:09.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 111 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly directory specified in the volumeMount
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:362
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":2,"skipped":10,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:53:10.185: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:53:10.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 3 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 18 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193

      Driver hostPathSymlink doesn't support ext3 -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:153
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":1,"skipped":5,"failed":0}
[BeforeEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 06:51:57.115: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename services
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-2266
... skipping 53 lines ...
• [SLOW TEST:73.248 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should implement service.kubernetes.io/headless
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2494
------------------------------
{"msg":"PASSED [sig-network] Services should implement service.kubernetes.io/headless","total":-1,"completed":2,"skipped":5,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:53:10.380: INFO: Driver local doesn't support ntfs -- skipping
... skipping 49 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1156
    should check if kubectl describe prints relevant information for cronjob
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1260
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for cronjob","total":-1,"completed":4,"skipped":11,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:53:12.668: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 74 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should not mount / map unused volumes in a pod
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:332
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod","total":-1,"completed":1,"skipped":17,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:53:13.459: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:53:13.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 188 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:55
    [Testpattern: Dynamic PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support non-existent path
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:192
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path","total":-1,"completed":3,"skipped":26,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 35 lines ...
• [SLOW TEST:19.354 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":4,"skipped":14,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:53:14.579: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:53:14.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 127 lines ...
• [SLOW TEST:15.492 seconds]
[k8s.io] [sig-node] Security Context
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should support container.SecurityContext.RunAsUser [LinuxOnly]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:102
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":6,"skipped":19,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:53:20.395: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 15 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
S
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image  [Conformance]","total":-1,"completed":4,"skipped":37,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 06:52:31.170: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 56 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":5,"skipped":37,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:53:22.066: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:53:22.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 71 lines ...
• [SLOW TEST:13.113 seconds]
[sig-storage] Projected secret
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:89
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]","total":-1,"completed":6,"skipped":44,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] Pod Disks
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 06:53:20.400: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename pod-disks
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pod-disks-6912
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Pod Disks
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:74
[It] should be able to delete a non-existent PD without error
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:447
STEP: delete a PD
W0116 06:53:22.842986  117569 gce_disks.go:972] GCE persistent disk "non-exist" not found in managed zones (us-west1-b)
Jan 16 06:53:22.843: INFO: Successfully deleted PD "non-exist".
[AfterEach] [sig-storage] Pod Disks
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:53:22.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-disks-6912" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Pod Disks should be able to delete a non-existent PD without error","total":-1,"completed":7,"skipped":23,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:53:23.094: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 98 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should be able to unmount after the subpath directory is deleted
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:440
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":3,"skipped":29,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 53 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should be able to unmount after the subpath directory is deleted
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:440
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":3,"skipped":5,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:53:24.772: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 167 lines ...
• [SLOW TEST:15.125 seconds]
[sig-storage] Downward API volume
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide podname only [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":8,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:53:25.521: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 184 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directory
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:203
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":2,"skipped":19,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 113 lines ...
STEP: creating an object not containing a namespace with in-cluster config
Jan 16 06:53:18.526: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.159.163 --kubeconfig=/workspace/.kube/config exec --namespace=kubectl-8627 httpd -- /bin/sh -x -c /tmp/kubectl create -f /tmp/invalid-configmap-without-namespace.yaml --v=6 2>&1'
Jan 16 06:53:21.465: INFO: rc: 255
STEP: trying to use kubectl with invalid token
Jan 16 06:53:21.465: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.159.163 --kubeconfig=/workspace/.kube/config exec --namespace=kubectl-8627 httpd -- /bin/sh -x -c /tmp/kubectl get pods --token=invalid --v=7 2>&1'
Jan 16 06:53:23.848: INFO: rc: 255
Jan 16 06:53:23.848: INFO: got err error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.159.163 --kubeconfig=/workspace/.kube/config exec --namespace=kubectl-8627 httpd -- /bin/sh -x -c /tmp/kubectl get pods --token=invalid --v=7 2>&1:
Command stdout:
I0116 06:53:23.294948     177 merged_client_builder.go:164] Using in-cluster namespace
I0116 06:53:23.295273     177 merged_client_builder.go:122] Using in-cluster configuration
I0116 06:53:23.301632     177 merged_client_builder.go:122] Using in-cluster configuration
I0116 06:53:23.328177     177 merged_client_builder.go:122] Using in-cluster configuration
I0116 06:53:23.328739     177 round_trippers.go:420] GET https://10.0.0.1:443/api/v1/namespaces/kubectl-8627/pods?limit=500
... skipping 8 lines ...
  "metadata": {},
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}]
F0116 06:53:23.484414     177 helpers.go:114] error: You must be logged in to the server (Unauthorized)

stderr:
+ /tmp/kubectl get pods '--token=invalid' '--v=7'
command terminated with exit code 255

error:
exit status 255
STEP: trying to use kubectl with invalid server
Jan 16 06:53:23.848: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.159.163 --kubeconfig=/workspace/.kube/config exec --namespace=kubectl-8627 httpd -- /bin/sh -x -c /tmp/kubectl get pods --server=invalid --v=6 2>&1'
Jan 16 06:53:26.120: INFO: rc: 255
Jan 16 06:53:26.121: INFO: got err error running /home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.159.163 --kubeconfig=/workspace/.kube/config exec --namespace=kubectl-8627 httpd -- /bin/sh -x -c /tmp/kubectl get pods --server=invalid --v=6 2>&1:
Command stdout:
I0116 06:53:25.757951     188 merged_client_builder.go:164] Using in-cluster namespace
I0116 06:53:25.774631     188 round_trippers.go:443] GET http://invalid/api?timeout=32s  in 16 milliseconds
I0116 06:53:25.774694     188 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: no such host
I0116 06:53:25.824148     188 round_trippers.go:443] GET http://invalid/api?timeout=32s  in 49 milliseconds
I0116 06:53:25.824238     188 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: no such host
I0116 06:53:25.824288     188 shortcut.go:89] Error loading discovery information: Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: no such host
I0116 06:53:25.865705     188 round_trippers.go:443] GET http://invalid/api?timeout=32s  in 41 milliseconds
I0116 06:53:25.865764     188 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: no such host
I0116 06:53:25.872541     188 round_trippers.go:443] GET http://invalid/api?timeout=32s  in 6 milliseconds
I0116 06:53:25.872599     188 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: no such host
I0116 06:53:25.876971     188 round_trippers.go:443] GET http://invalid/api?timeout=32s  in 4 milliseconds
I0116 06:53:25.877025     188 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: no such host
I0116 06:53:25.877054     188 helpers.go:221] Connection error: Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.0.0.10:53: no such host
F0116 06:53:25.877077     188 helpers.go:114] Unable to connect to the server: dial tcp: lookup invalid on 10.0.0.10:53: no such host

stderr:
+ /tmp/kubectl get pods '--server=invalid' '--v=6'
command terminated with exit code 255

error:
exit status 255
STEP: trying to use kubectl with invalid namespace
Jan 16 06:53:26.121: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.159.163 --kubeconfig=/workspace/.kube/config exec --namespace=kubectl-8627 httpd -- /bin/sh -x -c /tmp/kubectl get pods --namespace=invalid --v=6 2>&1'
Jan 16 06:53:28.778: INFO: stderr: "+ /tmp/kubectl get pods '--namespace=invalid' '--v=6'\n"
Jan 16 06:53:28.778: INFO: stdout: "I0116 06:53:28.189546     199 merged_client_builder.go:122] Using in-cluster configuration\nI0116 06:53:28.196973     199 merged_client_builder.go:122] Using in-cluster configuration\nI0116 06:53:28.221880     199 merged_client_builder.go:122] Using in-cluster configuration\nI0116 06:53:28.499728     199 round_trippers.go:443] GET https://10.0.0.1:443/api/v1/namespaces/invalid/pods?limit=500 200 OK in 277 milliseconds\nNo resources found in invalid namespace.\n"
Jan 16 06:53:28.778: INFO: stdout: I0116 06:53:28.189546     199 merged_client_builder.go:122] Using in-cluster configuration
... skipping 78 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:525
    should handle in-cluster config
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:769
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should handle in-cluster config","total":-1,"completed":6,"skipped":19,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:53:34.689: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 226 lines ...
• [SLOW TEST:11.732 seconds]
[sig-scheduling] LimitRange
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39
  should create a LimitRange with defaults and ensure pod has those defaults applied.
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/limit_range.go:55
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied.","total":-1,"completed":8,"skipped":31,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:53:34.844: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:53:34.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 44 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152

      Driver "local" does not provide raw block - skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:101
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":3,"skipped":7,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 06:52:43.264: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 70 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directory
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:203
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":4,"skipped":7,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:53:37.753: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:53:37.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 32 lines ...
Jan 16 06:53:27.161: INFO: Waiting for PV local-pvd49m7 to bind to PVC pvc-8drtc
Jan 16 06:53:27.161: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-8drtc] to have phase Bound
Jan 16 06:53:27.346: INFO: PersistentVolumeClaim pvc-8drtc found but phase is Pending instead of Bound.
Jan 16 06:53:29.470: INFO: PersistentVolumeClaim pvc-8drtc found and phase=Bound (2.309095992s)
Jan 16 06:53:29.470: INFO: Waiting up to 3m0s for PersistentVolume local-pvd49m7 to have phase Bound
Jan 16 06:53:29.626: INFO: PersistentVolume local-pvd49m7 found and phase=Bound (156.187888ms)
[It] should fail scheduling due to different NodeSelector
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:364
STEP: local-volume-type: dir
STEP: Initializing test volumes
Jan 16 06:53:30.850: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-d46e9739-89c9-482e-bf3a-1bdef35121fd] Namespace:persistent-local-volumes-test-366 PodName:hostexec-bootstrap-e2e-minion-group-5wcz-xvcln ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
Jan 16 06:53:30.850: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Creating local PVCs and PVs
... skipping 23 lines ...

• [SLOW TEST:28.727 seconds]
[sig-storage] PersistentVolumes-local 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Pod with node different from PV's NodeAffinity
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:338
    should fail scheduling due to different NodeSelector
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:364
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeSelector","total":-1,"completed":3,"skipped":14,"failed":0}

SS
------------------------------
[BeforeEach] [sig-cli] Kubectl Port forwarding
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 37 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:466
    that expects a client request
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:467
      should support a client that connects, sends NO DATA, and disconnects
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:468
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends NO DATA, and disconnects","total":-1,"completed":6,"skipped":45,"failed":0}

SSSS
------------------------------
[BeforeEach] [k8s.io] Container Runtime
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 23 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":22,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:53:44.339: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 180 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directories when readOnly specified in the volumeSource
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:392
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":4,"skipped":27,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:53:47.251: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 49 lines ...
• [SLOW TEST:10.196 seconds]
[sig-storage] Projected downwardAPI
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide podname only [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":8,"failed":0}

SS
------------------------------
[BeforeEach] [sig-network] Networking
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 130 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  Granular Checks: Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:161
    should function for pod-Service: udp
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:172
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for pod-Service: udp","total":-1,"completed":2,"skipped":8,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":38,"failed":0}
[BeforeEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 06:53:46.268: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename services
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-5278
... skipping 17 lines ...
• [SLOW TEST:5.900 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should check NodePort out-of-range
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1806
------------------------------
{"msg":"PASSED [sig-network] Services should check NodePort out-of-range","total":-1,"completed":8,"skipped":38,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:53:52.170: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:53:52.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 65 lines ...
Jan 16 06:52:57.071: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in init-container-6738
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
Jan 16 06:52:59.286: INFO: PodSpec: initContainers in spec.initContainers
Jan 16 06:53:53.648: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-a4cec9c9-d876-4d10-845f-cd02e83c496c", GenerateName:"", Namespace:"init-container-6738", SelfLink:"/api/v1/namespaces/init-container-6738/pods/pod-init-a4cec9c9-d876-4d10-845f-cd02e83c496c", UID:"30903a61-c5c5-4d51-82b9-c6c5dd15d7f1", ResourceVersion:"6563", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63714754379, loc:(*time.Location)(0x7bb7ec0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"286825774"}, Annotations:map[string]string{"kubernetes.io/psp":"e2e-test-privileged-psp"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-nrbkk", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002a19280), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nrbkk", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nrbkk", 
ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nrbkk", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0036f50c0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"bootstrap-e2e-minion-group-9dh8", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00364b200), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0036f5140)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0036f5160)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0036f5168), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0036f516c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714754379, loc:(*time.Location)(0x7bb7ec0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", 
LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714754379, loc:(*time.Location)(0x7bb7ec0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714754379, loc:(*time.Location)(0x7bb7ec0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714754379, loc:(*time.Location)(0x7bb7ec0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.138.0.5", PodIP:"10.64.0.49", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.64.0.49"}}, StartTime:(*v1.Time)(0xc00216c740), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000704770)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000704850)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9", ContainerID:"docker://e29948d79e6faae253b3dd90ae45a1fc99d9f8866d0b243f67687e3b2ae6414e", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00216c780), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00216c760), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc0036f51ef)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:53:53.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-6738" for this suite.

• [SLOW TEST:57.391 seconds]
[k8s.io] InitContainer [NodeConformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod","total":-1,"completed":5,"skipped":22,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 06:53:27.211: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-5870
... skipping 56 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    One pod requesting one prebound PVC
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
      should be able to mount volume and write from pod1
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:233
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":-1,"completed":8,"skipped":52,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":6,"skipped":22,"failed":0}

SSSSSSSS
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply should reuse port when apply to an existing SVC","total":-1,"completed":9,"skipped":34,"failed":0}
[BeforeEach] [k8s.io] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 06:53:45.581: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-4868
... skipping 19 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when creating containers with AllowPrivilegeEscalation
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:290
    should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":34,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:53:56.764: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:53:56.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 58 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should be able to unmount after the subpath directory is deleted
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:440
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":6,"skipped":47,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:54:00.209: INFO: Distro gci doesn't support ntfs -- skipping
... skipping 81 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support file as subpath [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:228
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":6,"skipped":29,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:54:03.340: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:54:03.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 137 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  load AppArmor profiles
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:31
    can disable an AppArmor profile, using unconfined
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:47
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] AppArmor load AppArmor profiles can disable an AppArmor profile, using unconfined","total":-1,"completed":4,"skipped":24,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":-1,"completed":7,"skipped":78,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 06:53:17.555: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 46 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should not mount / map unused volumes in a pod
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:332
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod","total":-1,"completed":8,"skipped":78,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Networking
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 118 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:192

      Driver supports dynamic provisioning, skipping InlineVolume pattern

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:688
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":-1,"completed":2,"skipped":13,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 06:52:24.827: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 200 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing single file [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:217
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":7,"skipped":48,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:54:08.738: INFO: Driver local doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:54:08.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 81 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":4,"skipped":14,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:54:10.011: INFO: Driver nfs doesn't support ext3 -- skipping
... skipping 81 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:203

      Driver gluster doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should be able to handle large requests: http","total":-1,"completed":5,"skipped":19,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 06:54:04.406: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-2317
... skipping 18 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl create quota
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2141
    should create a quota with scopes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2171
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should create a quota with scopes","total":-1,"completed":6,"skipped":19,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:54:10.090: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 215 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI attach test using mock driver
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:240
    should require VolumeAttach for drivers with attachment
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:262
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for drivers with attachment","total":-1,"completed":2,"skipped":4,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:54:10.818: INFO: Driver vsphere doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:54:10.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 114 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (ext3)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume","total":-1,"completed":5,"skipped":14,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:54:11.007: INFO: Distro gci doesn't support ntfs -- skipping
... skipping 129 lines ...
• [SLOW TEST:46.221 seconds]
[sig-storage] PVC Protection
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:137
------------------------------
{"msg":"PASSED [sig-storage] PVC Protection Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable","total":-1,"completed":4,"skipped":18,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 41 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directory
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:203
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":3,"skipped":9,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:54:11.656: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:54:11.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 3 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: emptydir]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver emptydir doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 73 lines ...
• [SLOW TEST:12.585 seconds]
[sig-auth] Metadata Concealment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should run a check-metadata-concealment job to completion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/metadata_concealment.go:34
------------------------------
{"msg":"PASSED [sig-auth] Metadata Concealment should run a check-metadata-concealment job to completion","total":-1,"completed":9,"skipped":85,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:54:17.028: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 108 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing single file [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:217
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":7,"skipped":53,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:54:17.864: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 37 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:377

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":5,"skipped":7,"failed":0}
[BeforeEach] [sig-auth] PodSecurityPolicy
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 06:54:16.820: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename podsecuritypolicy
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 9 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:54:20.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podsecuritypolicy-620" for this suite.

•
------------------------------
{"msg":"PASSED [sig-auth] PodSecurityPolicy should forbid pod creation when no PSP is available","total":-1,"completed":6,"skipped":7,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:54:20.888: INFO: Only supported for providers [vsphere] (not gce)
... skipping 48 lines ...
• [SLOW TEST:12.499 seconds]
[sig-storage] Downward API volume
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's memory limit [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":21,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:54:23.543: INFO: Distro gci doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:54:23.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 104 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330
    should create and stop a replication controller  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":-1,"completed":7,"skipped":24,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:54:24.893: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 81 lines ...
Jan 16 06:53:50.795: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(gcepd) supported size:{ 1Mi} 
STEP: creating a StorageClass volume-expand-8942-gcepd-scxfm6x
STEP: creating a claim
Jan 16 06:53:51.030: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Expanding non-expandable pvc
Jan 16 06:53:51.877: INFO: currentPvcSize {{5368709120 0} {<nil>} 5Gi BinarySI}, newSize {{6442450944 0} {<nil>}  BinarySI}
Jan 16 06:53:52.152: INFO: Error updating pvc gcepdkrrbn: PersistentVolumeClaim "gcepdkrrbn" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
... skipping 15 lines ...
Jan 16 06:54:23.783: INFO: Error updating pvc gcepdkrrbn: PersistentVolumeClaim "gcepdkrrbn" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
STEP: Deleting pvc
Jan 16 06:54:23.783: INFO: Deleting PersistentVolumeClaim "gcepdkrrbn"
STEP: Deleting sc
Jan 16 06:54:24.567: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 90 lines ...
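
The rejected updates above come from growing a bound claim whose StorageClass does not allow expansion. A minimal sketch (PVC name and sizes taken from the log, namespace assumed; pre-1.18 client-go signatures): spec.resources.requests may only grow when the class sets allowVolumeExpansion: true, otherwise the API server returns the "spec: Forbidden: is immutable ..." error seen in the retries.

```go
// Sketch only: reproduce the rejected resize of a non-expandable PVC.
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Namespace assumed from the suite name; the PVC name is from the log.
	pvcs := cs.CoreV1().PersistentVolumeClaims("volume-expand-8942")
	pvc, err := pvcs.Get("gcepdkrrbn", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// 5Gi -> 6Gi, mirroring currentPvcSize/newSize in the log above.
	pvc.Spec.Resources.Requests[v1.ResourceStorage] = resource.MustParse("6Gi")

	// Without allowVolumeExpansion on the StorageClass this fails with:
	// spec: Forbidden: is immutable after creation except resources.requests
	// for bound claims
	if _, err := pvcs.Update(pvc); err != nil {
		fmt.Println("update rejected:", err)
	}
}
```
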
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:444
    that expects a client request
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:445
      should support a client that connects, sends NO DATA, and disconnects
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:446
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends NO DATA, and disconnects","total":-1,"completed":3,"skipped":10,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:54:26.371: INFO: Driver cinder doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:54:26.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 110 lines ...
      Driver local doesn't support InlineVolume -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
S
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":-1,"completed":6,"skipped":10,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 06:54:25.898: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 98 lines ...
• [SLOW TEST:16.896 seconds]
[sig-api-machinery] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a persistent volume claim. [sig-storage]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:457
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim. [sig-storage]","total":-1,"completed":10,"skipped":94,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:54:33.950: INFO: Driver gluster doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:54:33.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 47 lines ...
• [SLOW TEST:8.611 seconds]
[sig-auth] PodSecurityPolicy
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should enforce the restricted policy.PodSecurityPolicy
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/pod_security_policy.go:85
------------------------------
{"msg":"PASSED [sig-auth] PodSecurityPolicy should enforce the restricted policy.PodSecurityPolicy","total":-1,"completed":4,"skipped":15,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:54:36.858: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:54:36.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 85 lines ...
• [SLOW TEST:11.148 seconds]
[sig-apps] ReplicaSet
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":8,"skipped":32,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
... skipping 65 lines ...
Jan 16 06:54:08.361: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2668.svc.cluster.local from pod dns-2668/dns-test-a60674fe-7002-4eed-9398-caf03d33b472: the server could not find the requested resource (get pods dns-test-a60674fe-7002-4eed-9398-caf03d33b472)
Jan 16 06:54:08.749: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2668.svc.cluster.local from pod dns-2668/dns-test-a60674fe-7002-4eed-9398-caf03d33b472: the server could not find the requested resource (get pods dns-test-a60674fe-7002-4eed-9398-caf03d33b472)
Jan 16 06:54:10.052: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2668.svc.cluster.local from pod dns-2668/dns-test-a60674fe-7002-4eed-9398-caf03d33b472: the server could not find the requested resource (get pods dns-test-a60674fe-7002-4eed-9398-caf03d33b472)
Jan 16 06:54:10.329: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2668.svc.cluster.local from pod dns-2668/dns-test-a60674fe-7002-4eed-9398-caf03d33b472: the server could not find the requested resource (get pods dns-test-a60674fe-7002-4eed-9398-caf03d33b472)
Jan 16 06:54:10.578: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2668.svc.cluster.local from pod dns-2668/dns-test-a60674fe-7002-4eed-9398-caf03d33b472: the server could not find the requested resource (get pods dns-test-a60674fe-7002-4eed-9398-caf03d33b472)
Jan 16 06:54:10.853: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2668.svc.cluster.local from pod dns-2668/dns-test-a60674fe-7002-4eed-9398-caf03d33b472: the server could not find the requested resource (get pods dns-test-a60674fe-7002-4eed-9398-caf03d33b472)
Jan 16 06:54:11.470: INFO: Lookups using dns-2668/dns-test-a60674fe-7002-4eed-9398-caf03d33b472 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2668.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2668.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2668.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2668.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2668.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2668.svc.cluster.local jessie_udp@dns-test-service-2.dns-2668.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2668.svc.cluster.local]

... skipping 19 lines ...

Jan 16 06:54:31.747: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2668.svc.cluster.local from pod dns-2668/dns-test-a60674fe-7002-4eed-9398-caf03d33b472: the server could not find the requested resource (get pods dns-test-a60674fe-7002-4eed-9398-caf03d33b472)
Jan 16 06:54:31.962: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2668.svc.cluster.local from pod dns-2668/dns-test-a60674fe-7002-4eed-9398-caf03d33b472: the server could not find the requested resource (get pods dns-test-a60674fe-7002-4eed-9398-caf03d33b472)
Jan 16 06:54:32.295: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2668.svc.cluster.local from pod dns-2668/dns-test-a60674fe-7002-4eed-9398-caf03d33b472: the server could not find the requested resource (get pods dns-test-a60674fe-7002-4eed-9398-caf03d33b472)
Jan 16 06:54:35.258: INFO: Lookups using dns-2668/dns-test-a60674fe-7002-4eed-9398-caf03d33b472 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2668.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2668.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2668.svc.cluster.local]

Jan 16 06:54:38.928: INFO: DNS probes using dns-2668/dns-test-a60674fe-7002-4eed-9398-caf03d33b472 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
... skipping 5 lines ...
• [SLOW TEST:46.642 seconds]
[sig-network] DNS
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":9,"skipped":59,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:93
Jan 16 06:54:41.118: INFO: Driver "nfs" does not support block volume mode - skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
... skipping 105 lines ...
• [SLOW TEST:29.101 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":8,"skipped":57,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":42,"failed":0}
[BeforeEach] [sig-storage] GCP Volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 06:54:05.832: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename gcp-volume
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gcp-volume-1554
... skipping 31 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:56
  GlusterFS
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:124
    should be mountable
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:125
------------------------------
{"msg":"PASSED [sig-storage] GCP Volumes GlusterFS should be mountable","total":-1,"completed":10,"skipped":42,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:54:47.988: INFO: Driver emptydir doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:54:47.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 114 lines ...
• [SLOW TEST:12.260 seconds]
[sig-storage] Projected configMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:73
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":5,"skipped":23,"failed":0}

SS
------------------------------
[BeforeEach] [k8s.io] [sig-node] Mount propagation
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 64 lines ...
Jan 16 06:53:38.122: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 16 06:53:41.126: INFO: Exec stderr: ""
Jan 16 06:53:47.071: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir "/var/lib/kubelet/mount-propagation-4077"/host; mount -t tmpfs e2e-mount-propagation-host "/var/lib/kubelet/mount-propagation-4077"/host; echo host > "/var/lib/kubelet/mount-propagation-4077"/host/file] Namespace:mount-propagation-4077 PodName:hostexec-bootstrap-e2e-minion-group-5wcz-mpnjh ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
Jan 16 06:53:47.071: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 16 06:53:48.359: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-4077 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 16 06:53:48.359: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 16 06:53:49.738: INFO: pod master mount master: stdout: "master", stderr: "" error: <nil>
Jan 16 06:53:49.990: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-4077 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 16 06:53:49.990: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 16 06:53:52.102: INFO: pod master mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Jan 16 06:53:52.207: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-4077 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 16 06:53:52.207: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 16 06:53:54.060: INFO: pod master mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Jan 16 06:53:54.356: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-4077 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 16 06:53:54.356: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 16 06:53:56.210: INFO: pod master mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Jan 16 06:53:56.567: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-4077 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 16 06:53:56.567: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 16 06:53:58.414: INFO: pod master mount host: stdout: "host", stderr: "" error: <nil>
Jan 16 06:53:58.666: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-4077 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 16 06:53:58.666: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 16 06:54:00.626: INFO: pod slave mount master: stdout: "master", stderr: "" error: <nil>
Jan 16 06:54:00.838: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-4077 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 16 06:54:00.838: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 16 06:54:02.433: INFO: pod slave mount slave: stdout: "slave", stderr: "" error: <nil>
Jan 16 06:54:02.745: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-4077 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 16 06:54:02.745: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 16 06:54:04.400: INFO: pod slave mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Jan 16 06:54:04.630: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-4077 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 16 06:54:04.630: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 16 06:54:06.724: INFO: pod slave mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Jan 16 06:54:07.270: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-4077 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 16 06:54:07.270: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 16 06:54:09.910: INFO: pod slave mount host: stdout: "host", stderr: "" error: <nil>
Jan 16 06:54:10.223: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-4077 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 16 06:54:10.223: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 16 06:54:11.609: INFO: pod private mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1
Jan 16 06:54:11.796: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-4077 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 16 06:54:11.796: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 16 06:54:13.314: INFO: pod private mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Jan 16 06:54:13.465: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-4077 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 16 06:54:13.465: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 16 06:54:14.350: INFO: pod private mount private: stdout: "private", stderr: "" error: <nil>
Jan 16 06:54:14.614: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-4077 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 16 06:54:14.615: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 16 06:54:16.578: INFO: pod private mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Jan 16 06:54:16.751: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-4077 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 16 06:54:16.751: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 16 06:54:19.326: INFO: pod private mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1
Jan 16 06:54:19.723: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-4077 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 16 06:54:19.723: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 16 06:54:23.003: INFO: pod default mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1
Jan 16 06:54:23.271: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-4077 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 16 06:54:23.271: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 16 06:54:26.125: INFO: pod default mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Jan 16 06:54:26.233: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-4077 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 16 06:54:26.234: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 16 06:54:28.067: INFO: pod default mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Jan 16 06:54:28.189: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-4077 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 16 06:54:28.189: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 16 06:54:30.328: INFO: pod default mount default: stdout: "default", stderr: "" error: <nil>
Jan 16 06:54:30.459: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-4077 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 16 06:54:30.459: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 16 06:54:31.560: INFO: pod default mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1
Jan 16 06:54:31.560: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test `cat "/var/lib/kubelet/mount-propagation-4077"/master/file` = master] Namespace:mount-propagation-4077 PodName:hostexec-bootstrap-e2e-minion-group-5wcz-mpnjh ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
Jan 16 06:54:31.560: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 16 06:54:34.456: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test ! -e "/var/lib/kubelet/mount-propagation-4077"/slave/file] Namespace:mount-propagation-4077 PodName:hostexec-bootstrap-e2e-minion-group-5wcz-mpnjh ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
Jan 16 06:54:34.456: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 16 06:54:38.322: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/var/lib/kubelet/mount-propagation-4077"/host] Namespace:mount-propagation-4077 PodName:hostexec-bootstrap-e2e-minion-group-5wcz-mpnjh ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
Jan 16 06:54:38.322: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 21 lines ...
• [SLOW TEST:149.528 seconds]
[k8s.io] [sig-node] Mount propagation
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should propagate mounts to the host
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:82
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Mount propagation should propagate mounts to the host","total":-1,"completed":5,"skipped":67,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:54:57.308: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:54:57.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 12 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data","total":-1,"completed":3,"skipped":13,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 06:54:05.609: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 61 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support non-existent path
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:192
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":4,"skipped":13,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:54:57.715: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 86 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should resize volume when PVC is edited while pod is using it
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:220
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":2,"skipped":23,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:54:58.177: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:54:58.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 131 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should store data
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":4,"skipped":27,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:54:58.498: INFO: Driver csi-hostpath doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:54:58.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 98 lines ...
• [SLOW TEST:52.901 seconds]
[sig-api-machinery] Aggregator
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":6,"skipped":23,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:55:03.976: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 42 lines ...
• [SLOW TEST:6.138 seconds]
[sig-api-machinery] Watchers
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":3,"skipped":27,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:55:04.328: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 64 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing single file [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:217
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":10,"skipped":64,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:55:04.759: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 128 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should not mount / map unused volumes in a pod
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:332
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod","total":-1,"completed":11,"skipped":35,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:55:06.389: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 15 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a Kubelet.","total":-1,"completed":5,"skipped":34,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 06:55:03.828: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-7729
... skipping 18 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:55:08.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7729" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure  [Conformance]","total":-1,"completed":6,"skipped":34,"failed":0}

S
------------------------------
[BeforeEach] [sig-autoscaling] DNS horizontal autoscaling
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 107 lines ...
• [SLOW TEST:63.378 seconds]
[sig-storage] Mounted volume expand
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Should verify mounted devices can be resized
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/mounted_volume_resize.go:115
------------------------------
{"msg":"PASSED [sig-storage] Mounted volume expand Should verify mounted devices can be resized","total":-1,"completed":8,"skipped":50,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:55:12.125: INFO: Distro gci doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:55:12.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 60 lines ...
      Only supported for providers [aws] (not gce)

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1645
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":30,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes GCEPD
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 06:54:03.653: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pv-4458
... skipping 9 lines ...
Jan 16 06:54:10.287: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-pv4kc] to have phase Bound
Jan 16 06:54:10.538: INFO: PersistentVolumeClaim pvc-pv4kc found but phase is Pending instead of Bound.
Jan 16 06:54:12.686: INFO: PersistentVolumeClaim pvc-pv4kc found and phase=Bound (2.399497989s)
Jan 16 06:54:12.686: INFO: Waiting up to 3m0s for PersistentVolume gce-lwvhg to have phase Bound
Jan 16 06:54:12.846: INFO: PersistentVolume gce-lwvhg found and phase=Bound (159.895127ms)
STEP: Creating the Client Pod
[It] should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:139
STEP: Deleting the Persistent Volume
Jan 16 06:54:33.963: INFO: Deleting PersistentVolume "gce-lwvhg"
STEP: Deleting the client pod
Jan 16 06:54:34.359: INFO: Deleting pod "pvc-tester-2h9nz" in namespace "pv-4458"
Jan 16 06:54:35.171: INFO: Wait up to 5m0s for pod "pvc-tester-2h9nz" to be fully deleted
... skipping 14 lines ...
Jan 16 06:55:13.729: INFO: Successfully deleted PD "bootstrap-e2e-56a70004-e05c-46f3-876a-00fac3417b19".


• [SLOW TEST:70.076 seconds]
[sig-storage] PersistentVolumes GCEPD
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:139
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes GCEPD should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach","total":-1,"completed":6,"skipped":30,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:55:13.732: INFO: Only supported for providers [aws] (not gce)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:55:13.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 12 lines ...
      Only supported for providers [aws] (not gce)

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1645
------------------------------
SSSSSSSSS
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from API server.","total":-1,"completed":1,"skipped":6,"failed":0}
[BeforeEach] [k8s.io] Probing container
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 06:50:57.470: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-9200
... skipping 16 lines ...
• [SLOW TEST:256.288 seconds]
[k8s.io] Probing container
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:167
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance]","total":-1,"completed":2,"skipped":6,"failed":0}

SS
------------------------------
[BeforeEach] [sig-api-machinery] Servers with support for API chunking
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 79 lines ...
• [SLOW TEST:37.668 seconds]
[sig-api-machinery] Servers with support for API chunking
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should return chunks of results for list calls
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/chunking.go:77
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for API chunking should return chunks of results for list calls","total":-1,"completed":9,"skipped":36,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:55:15.819: INFO: Only supported for providers [azure] (not gce)
... skipping 186 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":-1,"completed":9,"skipped":59,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:55:16.845: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 34 lines ...
Jan 16 06:54:19.833: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-smgfj] to have phase Bound
Jan 16 06:54:20.190: INFO: PersistentVolumeClaim pvc-smgfj found but phase is Pending instead of Bound.
Jan 16 06:54:22.676: INFO: PersistentVolumeClaim pvc-smgfj found and phase=Bound (2.843822134s)
Jan 16 06:54:22.676: INFO: Waiting up to 3m0s for PersistentVolume gce-pl8fh to have phase Bound
Jan 16 06:54:22.943: INFO: PersistentVolume gce-pl8fh found and phase=Bound (267.003128ms)
STEP: Creating the Client Pod
[It] should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:124
STEP: Deleting the Claim
Jan 16 06:54:50.353: INFO: Deleting PersistentVolumeClaim "pvc-smgfj"
STEP: Deleting the Pod
Jan 16 06:54:50.678: INFO: Deleting pod "pvc-tester-lgmhd" in namespace "pv-5920"
Jan 16 06:54:50.786: INFO: Wait up to 5m0s for pod "pvc-tester-lgmhd" to be fully deleted
... skipping 14 lines ...
Jan 16 06:55:19.025: INFO: Successfully deleted PD "bootstrap-e2e-d6efdc99-5061-4d3c-a7a2-977d6b4cb2f1".


• [SLOW TEST:65.004 seconds]
[sig-storage] PersistentVolumes GCEPD
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:124
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes GCEPD should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach","total":-1,"completed":4,"skipped":13,"failed":0}

SSSSSSSS
------------------------------
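Editor's note: the GCEPD results above pre-provision a PersistentVolume backed by an existing persistent disk and then bind a claim to it, matching the "Waiting ... to have phase Bound" steps in the log. A rough sketch of such a pre-provisioned PV (disk name, size, and filesystem are placeholder assumptions):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pv := corev1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "gce-example"}, // hypothetical
		Spec: corev1.PersistentVolumeSpec{
			Capacity: corev1.ResourceList{
				corev1.ResourceStorage: resource.MustParse("10Gi"),
			},
			AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				// Points at a GCE PD that already exists in the project/zone.
				GCEPersistentDisk: &corev1.GCEPersistentDiskVolumeSource{
					PDName: "bootstrap-e2e-example-disk", // assumed disk name
					FSType: "ext4",
				},
			},
		},
	}
	out, _ := json.MarshalIndent(pv, "", "  ")
	fmt.Println(string(out))
}

The test then deletes the PVC while a pod still uses the volume and verifies the pod can still be deleted and the PD detached cleanly.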
[BeforeEach] [k8s.io] [sig-node] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 24 lines ...
• [SLOW TEST:12.817 seconds]
[k8s.io] [sig-node] Security Context
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:88
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly]","total":-1,"completed":7,"skipped":35,"failed":0}

SS
------------------------------
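Editor's note: the Security Context result above sets identity at the pod level. A minimal sketch, with the UID/GID values and image purely illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "security-context-demo"}, // hypothetical
		Spec: corev1.PodSpec{
			// Pod-level identity, inherited by every container unless a
			// container-level SecurityContext overrides it.
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser:  int64Ptr(1001),
				RunAsGroup: int64Ptr(2002),
			},
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox", // assumed image
				Command: []string{"sh", "-c", "id"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}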
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:55:21.513: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 6 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link-bindmounted]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 84 lines ...
• [SLOW TEST:9.751 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  custom resource defaulting for requests and from storage works  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":-1,"completed":9,"skipped":64,"failed":0}

SS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 66 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [k8s.io] GlusterDynamicProvisioner
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    should create and delete persistent volumes [fast]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:747
------------------------------
{"msg":"PASSED [sig-storage] Dynamic Provisioning [k8s.io] GlusterDynamicProvisioner should create and delete persistent volumes [fast]","total":-1,"completed":7,"skipped":32,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:55:22.137: INFO: Driver vsphere doesn't support ext3 -- skipping
... skipping 87 lines ...
• [SLOW TEST:255.091 seconds]
[k8s.io] Probing container
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":9,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:55:24.264: INFO: Driver nfs doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:55:24.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 83 lines ...
• [SLOW TEST:27.386 seconds]
[k8s.io] Probing container
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":69,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:55:24.703: INFO: Only supported for providers [vsphere] (not gce)
... skipping 138 lines ...
• [SLOW TEST:121.108 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to up and down services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:968
------------------------------
{"msg":"PASSED [sig-network] Services should be able to up and down services","total":-1,"completed":4,"skipped":32,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:55:25.852: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 261 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  Granular Checks: Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:161
    should update endpoints: http
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:217
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should update endpoints: http","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:55:28.169: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:55:28.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 140 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directory
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:203
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":7,"skipped":10,"failed":0}

SS
------------------------------
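Editor's note: the subPath results in this run all revolve around mounting a path *within* a volume rather than the volume root. A minimal sketch of the construct under test (volume type, paths, and image are assumptions):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "subpath-demo"}, // hypothetical
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name:         "data",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:  "main",
				Image: "busybox", // assumed image
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "data",
					MountPath: "/mnt/dir",
					// Only this directory inside the volume is bind-mounted,
					// which is what "should support existing directory" exercises.
					SubPath: "existing-dir",
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}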
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:55:28.236: INFO: Driver local doesn't support ntfs -- skipping
... skipping 110 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when create a pod with lifecycle hook
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":26,"failed":0}

SSS
------------------------------
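Editor's note: the lifecycle-hook result above runs a command inside the container immediately after it starts. A minimal sketch (image and command are assumptions; the Handler type is named LifecycleHandler in newer releases):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "lifecycle-demo"}, // hypothetical
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "main",
				Image: "busybox", // assumed image
				Lifecycle: &corev1.Lifecycle{
					// Executed in the container right after it starts; the pod
					// is not considered Running until the hook completes.
					PostStart: &corev1.Handler{
						Exec: &corev1.ExecAction{
							Command: []string{"sh", "-c", "echo hook >> /tmp/hook.log"},
						},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}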
[BeforeEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 29 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:43
    volume on default medium should have the correct mode using FSGroup
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:66
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup","total":-1,"completed":5,"skipped":21,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:55:34.357: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:55:34.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 77 lines ...
• [SLOW TEST:30.866 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":4,"skipped":32,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:55:35.204: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:55:35.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 240 lines ...
Jan 16 06:55:23.781: INFO: Waiting for PV local-pvjq8f6 to bind to PVC pvc-qdd65
Jan 16 06:55:23.781: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-qdd65] to have phase Bound
Jan 16 06:55:24.158: INFO: PersistentVolumeClaim pvc-qdd65 found but phase is Pending instead of Bound.
Jan 16 06:55:26.827: INFO: PersistentVolumeClaim pvc-qdd65 found and phase=Bound (3.046332213s)
Jan 16 06:55:26.827: INFO: Waiting up to 3m0s for PersistentVolume local-pvjq8f6 to have phase Bound
Jan 16 06:55:27.195: INFO: PersistentVolume local-pvjq8f6 found and phase=Bound (367.678898ms)
[It] should fail scheduling due to different NodeAffinity
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:360
STEP: local-volume-type: dir
STEP: Initializing test volumes
Jan 16 06:55:27.729: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-1c207039-1221-43c6-b727-760cbe13431e] Namespace:persistent-local-volumes-test-5215 PodName:hostexec-bootstrap-e2e-minion-group-5wcz-jvv7n ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
Jan 16 06:55:27.729: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Creating local PVCs and PVs
... skipping 23 lines ...

• [SLOW TEST:21.820 seconds]
[sig-storage] PersistentVolumes-local 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Pod with node different from PV's NodeAffinity
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:338
    should fail scheduling due to different NodeAffinity
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:360
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 06:55:21.907: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 25 lines ...
• [SLOW TEST:14.535 seconds]
[sig-storage] ConfigMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":66,"failed":0}

SS
------------------------------
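Editor's note: the ConfigMap result above mounts the same ConfigMap through two separate volumes in one pod. A minimal sketch of that shape (ConfigMap name, mount paths, and image are assumptions):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// configMapVolume builds a volume backed by the named ConfigMap.
func configMapVolume(name, cmName string) corev1.Volume {
	return corev1.Volume{
		Name: name,
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
			},
		},
	}
}

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-two-volumes"}, // hypothetical
		Spec: corev1.PodSpec{
			// The same ConfigMap consumed twice at different paths.
			Volumes: []corev1.Volume{
				configMapVolume("cfg-a", "shared-config"), // assumed ConfigMap name
				configMapVolume("cfg-b", "shared-config"),
			},
			Containers: []corev1.Container{{
				Name:  "main",
				Image: "busybox", // assumed image
				VolumeMounts: []corev1.VolumeMount{
					{Name: "cfg-a", MountPath: "/etc/cfg-a"},
					{Name: "cfg-b", MountPath: "/etc/cfg-b"},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}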
{"msg":"PASSED [sig-autoscaling] DNS horizontal autoscaling [DisabledForLargeClusters] kube-dns-autoscaler should scale kube-dns pods in both nonfaulty and faulty scenarios","total":-1,"completed":11,"skipped":95,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 06:55:10.395: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-5823
... skipping 59 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1462
    should be able to retrieve and filter logs  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":-1,"completed":12,"skipped":95,"failed":0}

S
------------------------------
[BeforeEach] [k8s.io] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 16 lines ...
• [SLOW TEST:10.974 seconds]
[k8s.io] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should get a host IP [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":13,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:55:39.172: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 165 lines ...
• [SLOW TEST:27.879 seconds]
[k8s.io] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should support pod readiness gates [NodeFeature:PodReadinessGate]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:789
------------------------------
{"msg":"PASSED [k8s.io] Pods should support pod readiness gates [NodeFeature:PodReadinessGate]","total":-1,"completed":3,"skipped":8,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:55:41.647: INFO: Only supported for providers [openstack] (not gce)
... skipping 172 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:55
    [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      Verify if offline PVC expansion works
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:162
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":5,"skipped":25,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:55:45.672: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:55:45.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 154 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should be able to unmount after the subpath directory is deleted
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:440
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":11,"skipped":59,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:55:53.504: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 146 lines ...
• [SLOW TEST:191.073 seconds]
[sig-apps] Deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should not disrupt a cloud load-balancer's connectivity during rollout
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:128
------------------------------
{"msg":"PASSED [sig-apps] Deployment should not disrupt a cloud load-balancer's connectivity during rollout","total":-1,"completed":3,"skipped":14,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 83 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directories when readOnly specified in the volumeSource
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:392
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeAffinity","total":-1,"completed":7,"skipped":40,"failed":0}
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 06:55:35.571: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-327
... skipping 26 lines ...
• [SLOW TEST:19.210 seconds]
[sig-storage] Downward API volume
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":40,"failed":0}

SSSSSS
------------------------------
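Editor's note: the Downward API volume result above asserts the file mode set on an individual projected item. A minimal sketch of such a volume source (the path and mode value are illustrative assumptions):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400) // owner read-only; the per-file bit the test asserts on
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path:     "labels",
					FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
					Mode:     &mode,
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}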
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:55:54.833: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 61 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1895
    should update a single-container pod's image  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":-1,"completed":5,"skipped":38,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:55:55.854: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 15 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":12,"skipped":39,"failed":0}
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 06:55:51.851: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-4820
... skipping 62 lines ...
Jan 16 06:55:13.637: INFO: creating *v1.StatefulSet: csi-mock-volumes-1400/csi-mockplugin
Jan 16 06:55:13.886: INFO: creating *v1beta1.CSIDriver: csi-mock-csi-mock-volumes-1400
Jan 16 06:55:14.444: INFO: creating *v1.StatefulSet: csi-mock-volumes-1400/csi-mockplugin-attacher
Jan 16 06:55:14.693: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-1400"
STEP: Creating pod
STEP: checking for CSIInlineVolumes feature
Jan 16 06:55:29.140: INFO: Error getting logs for pod csi-inline-volume-z2wd2: the server rejected our request for an unknown reason (get pods csi-inline-volume-z2wd2)
STEP: Deleting pod csi-inline-volume-z2wd2 in namespace csi-mock-volumes-1400
STEP: Deleting the previously created pod
Jan 16 06:55:34.171: INFO: Deleting pod "pvc-volume-tester-t6j7p" in namespace "csi-mock-volumes-1400"
Jan 16 06:55:34.370: INFO: Wait up to 5m0s for pod "pvc-volume-tester-t6j7p" to be fully deleted
WARNING: pod log: pvc-volume-tester-t6j7p/volume-tester: pods "pvc-volume-tester-t6j7p" not found
STEP: Checking CSI driver logs
Jan 16 06:55:49.377: INFO: CSI driver logs:
mock driver started
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-1400","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-1400","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-1400","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-1400","max_volumes_per_node":2},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"csi-428cf1cdb89c3d8dfcf082289648b3f862446bacc5933bafb9155adb14c4f010","target_path":"/var/lib/kubelet/pods/fe8d8756-972e-4df8-bf64-3332c4f1b26d/volumes/kubernetes.io~csi/my-volume/mount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/ephemeral":"true","csi.storage.k8s.io/pod.name":"pvc-volume-tester-t6j7p","csi.storage.k8s.io/pod.namespace":"csi-mock-volumes-1400","csi.storage.k8s.io/pod.uid":"fe8d8756-972e-4df8-bf64-3332c4f1b26d","csi.storage.k8s.io/serviceAccount.name":"default"}},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetVolumeStats","Request":{"volume_id":"csi-428cf1cdb89c3d8dfcf082289648b3f862446bacc5933bafb9155adb14c4f010","volume_path":"/var/lib/kubelet/pods/fe8d8756-972e-4df8-bf64-3332c4f1b26d/volumes/kubernetes.io~csi/my-volume/mount"},"Response":null,"Error":"rpc error: code = NotFound desc = csi-428cf1cdb89c3d8dfcf082289648b3f862446bacc5933bafb9155adb14c4f010"}
gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"csi-428cf1cdb89c3d8dfcf082289648b3f862446bacc5933bafb9155adb14c4f010","target_path":"/var/lib/kubelet/pods/fe8d8756-972e-4df8-bf64-3332c4f1b26d/volumes/kubernetes.io~csi/my-volume/mount"},"Response":{},"Error":""}

Jan 16 06:55:49.377: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-t6j7p
Jan 16 06:55:49.377: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-1400
Jan 16 06:55:49.377: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: fe8d8756-972e-4df8-bf64-3332c4f1b26d
Jan 16 06:55:49.377: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: true
Jan 16 06:55:49.377: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default
... skipping 38 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:296
    contain ephemeral=true when using inline volume
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:346
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","total":-1,"completed":11,"skipped":84,"failed":0}

SSSS
------------------------------
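Editor's note: the CSI mock-volume result above uses an inline (ephemeral) CSI volume, which is why the NodePublishVolume request in the driver log carries csi.storage.k8s.io/ephemeral and the pod name/namespace/UID as volume attributes. A minimal sketch of the pod shape, reusing the driver name from the log (image, paths, and pod name are assumptions):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pvc-volume-tester"}, // hypothetical
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "volume-tester",
				Image: "busybox", // assumed image
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "my-volume",
					MountPath: "/mnt/test",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "my-volume",
				VolumeSource: corev1.VolumeSource{
					// Inline CSI volume: no PVC/PV objects exist; kubelet calls
					// NodePublishVolume directly and injects the pod metadata
					// seen as csi.storage.k8s.io/* attributes in the log above.
					CSI: &corev1.CSIVolumeSource{
						Driver: "csi-mock-csi-mock-volumes-1400", // driver name from the log
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}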
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":6,"skipped":25,"failed":0}
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 06:55:54.742: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-3237
... skipping 22 lines ...
• [SLOW TEST:9.513 seconds]
[sig-storage] Projected downwardAPI
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":25,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:56:04.258: INFO: Driver local doesn't support ext4 -- skipping
... skipping 47 lines ...
• [SLOW TEST:26.721 seconds]
[sig-apps] DisruptionController
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  evictions: maxUnavailable allow single eviction, percentage => should allow an eviction
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:151
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: maxUnavailable allow single eviction, percentage =\u003e should allow an eviction","total":-1,"completed":3,"skipped":20,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:56:05.906: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:56:05.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 60 lines ...
• [SLOW TEST:31.681 seconds]
[sig-apps] Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks succeed
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:46
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks succeed","total":-1,"completed":6,"skipped":25,"failed":0}

S
------------------------------
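Editor's note: the Job result above waits for a fixed number of successful completions. A minimal sketch of a Job that runs to completion when its tasks succeed (counts, image, and command are illustrative assumptions):

package main

import (
	"encoding/json"
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	job := batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "succeed-demo"}, // hypothetical
		Spec: batchv1.JobSpec{
			Completions: int32Ptr(4), // four pods must exit 0...
			Parallelism: int32Ptr(2), // ...running at most two at a time
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyNever,
					Containers: []corev1.Container{{
						Name:    "worker",
						Image:   "busybox", // assumed image
						Command: []string{"sh", "-c", "exit 0"},
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(job, "", "  ")
	fmt.Println(string(out))
}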
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:56:06.052: INFO: Driver hostPath doesn't support ext4 -- skipping
... skipping 118 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should store data
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":7,"skipped":49,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] CronJob
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 25 lines ...
• [SLOW TEST:103.740 seconds]
[sig-apps] CronJob
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should remove from active list jobs that have been deleted
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:195
------------------------------
{"msg":"PASSED [sig-apps] CronJob should remove from active list jobs that have been deleted","total":-1,"completed":6,"skipped":22,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 61 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing single file [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:217
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":8,"skipped":37,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:56:09.513: INFO: Driver nfs doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:56:09.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 69 lines ...
• [SLOW TEST:42.231 seconds]
[sig-network] DNS
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":-1,"completed":8,"skipped":20,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":39,"failed":0}
[BeforeEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 06:55:58.696: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-4775
... skipping 25 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:43
    new files should be created with FSGroup ownership when container is non-root
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:54
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":15,"failed":0}
[BeforeEach] [sig-storage] Zone Support
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 06:56:10.348: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename zone-support
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in zone-support-6997
... skipping 31 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: blockfs]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 28 lines ...
• [SLOW TEST:9.822 seconds]
[sig-storage] Projected combined
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":50,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 87 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI attach test using mock driver
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:240
    should not require VolumeAttach for drivers without attachment
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:262
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should not require VolumeAttach for drivers without attachment","total":-1,"completed":8,"skipped":44,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:56:16.434: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:56:16.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 89 lines ...
• [SLOW TEST:10.563 seconds]
[sig-storage] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":26,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:56:16.484: INFO: Driver nfs doesn't support ntfs -- skipping
... skipping 174 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly directory specified in the volumeMount
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:362
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":7,"skipped":74,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
... skipping 27 lines ...
Jan 16 06:56:01.969: INFO: Trying to get logs from node bootstrap-e2e-minion-group-5wcz pod exec-volume-test-inlinevolume-klw5 container exec-container-inlinevolume-klw5: <nil>
STEP: delete the pod
Jan 16 06:56:03.343: INFO: Waiting for pod exec-volume-test-inlinevolume-klw5 to disappear
Jan 16 06:56:03.562: INFO: Pod exec-volume-test-inlinevolume-klw5 no longer exists
STEP: Deleting pod exec-volume-test-inlinevolume-klw5
Jan 16 06:56:03.562: INFO: Deleting pod "exec-volume-test-inlinevolume-klw5" in namespace "volume-2672"
Jan 16 06:56:04.770: INFO: error deleting PD "bootstrap-e2e-6688dc11-fccd-40c8-8cf3-b052d287af8a": googleapi: Error 400: The disk resource 'projects/k8s-jkns-e2e-gke-ubuntu-slow/zones/us-west1-b/disks/bootstrap-e2e-6688dc11-fccd-40c8-8cf3-b052d287af8a' is already being used by 'projects/k8s-jkns-e2e-gke-ubuntu-slow/zones/us-west1-b/instances/bootstrap-e2e-minion-group-5wcz', resourceInUseByAnotherResource
Jan 16 06:56:04.770: INFO: Couldn't delete PD "bootstrap-e2e-6688dc11-fccd-40c8-8cf3-b052d287af8a", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-jkns-e2e-gke-ubuntu-slow/zones/us-west1-b/disks/bootstrap-e2e-6688dc11-fccd-40c8-8cf3-b052d287af8a' is already being used by 'projects/k8s-jkns-e2e-gke-ubuntu-slow/zones/us-west1-b/instances/bootstrap-e2e-minion-group-5wcz', resourceInUseByAnotherResource
Jan 16 06:56:10.683: INFO: error deleting PD "bootstrap-e2e-6688dc11-fccd-40c8-8cf3-b052d287af8a": googleapi: Error 400: The disk resource 'projects/k8s-jkns-e2e-gke-ubuntu-slow/zones/us-west1-b/disks/bootstrap-e2e-6688dc11-fccd-40c8-8cf3-b052d287af8a' is already being used by 'projects/k8s-jkns-e2e-gke-ubuntu-slow/zones/us-west1-b/instances/bootstrap-e2e-minion-group-5wcz', resourceInUseByAnotherResource
Jan 16 06:56:10.683: INFO: Couldn't delete PD "bootstrap-e2e-6688dc11-fccd-40c8-8cf3-b052d287af8a", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-jkns-e2e-gke-ubuntu-slow/zones/us-west1-b/disks/bootstrap-e2e-6688dc11-fccd-40c8-8cf3-b052d287af8a' is already being used by 'projects/k8s-jkns-e2e-gke-ubuntu-slow/zones/us-west1-b/instances/bootstrap-e2e-minion-group-5wcz', resourceInUseByAnotherResource
Jan 16 06:56:17.630: INFO: Successfully deleted PD "bootstrap-e2e-6688dc11-fccd-40c8-8cf3-b052d287af8a".
Jan 16 06:56:17.630: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:56:17.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-2672" for this suite.
... skipping 164 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:55
    [Testpattern: Dynamic PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support file as subpath [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:228
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":7,"skipped":13,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:56:19.633: INFO: Only supported for providers [vsphere] (not gce)
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:56:19.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 86 lines ...
• [SLOW TEST:19.034 seconds]
[sig-apps] ReplicaSet
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a private image
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/replica_set.go:97
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a private image","total":-1,"completed":8,"skipped":27,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root","total":-1,"completed":14,"skipped":39,"failed":0}
[BeforeEach] [sig-api-machinery] Garbage collector
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 06:56:12.486: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-9569
... skipping 39 lines ...
• [SLOW TEST:10.899 seconds]
[sig-api-machinery] Garbage collector
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":-1,"completed":15,"skipped":39,"failed":0}

SSS
------------------------------
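Editor's note: the garbage-collector result above hinges on the delete propagation policy. With foreground propagation, the owning object lingers (with a deletion timestamp) until the GC has removed all its dependents, which is exactly the "keep the rc around until all its pods are deleted" behavior. A client-go sketch (the context-taking Delete signature assumes client-go v0.18 or later):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteRCForeground deletes a ReplicationController with foreground
// propagation: the RC remains visible until the garbage collector has
// deleted every pod it owns.
func deleteRCForeground(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	fg := metav1.DeletePropagationForeground
	return cs.CoreV1().ReplicationControllers(ns).Delete(ctx, name, metav1.DeleteOptions{
		PropagationPolicy: &fg,
	})
}

func main() { fmt.Println("see deleteRCForeground") }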
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:56:23.416: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 181 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directories when readOnly specified in the volumeSource
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:392
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":8,"skipped":29,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:56:27.206: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 6 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-bindmounted]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":3,"skipped":13,"failed":0}
[BeforeEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 06:56:18.477: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-9773
... skipping 20 lines ...
• [SLOW TEST:8.882 seconds]
[sig-storage] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":13,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:56:27.364: INFO: Driver local doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:56:27.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 36 lines ...
      Driver hostPath doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should preserve attachment policy when no CSIDriver present","total":-1,"completed":5,"skipped":22,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:93
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
... skipping 146 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  When pod refers to non-existent ephemeral storage
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53
    should allow deletion of pod with invalid volume : configmap
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : configmap","total":-1,"completed":6,"skipped":27,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:56:27.955: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 36 lines ...
STEP: Destroying namespace "services-6535" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:692

•
------------------------------
{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces","total":-1,"completed":9,"skipped":32,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:56:28.913: INFO: Only supported for providers [aws] (not gce)
... skipping 280 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1541
    should add annotations for pods in rc  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":-1,"completed":5,"skipped":38,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-network] Services should release NodePorts on delete","total":-1,"completed":5,"skipped":18,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:56:33.304: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:56:33.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 145 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":6,"skipped":22,"failed":0}
[BeforeEach] [k8s.io] Container Runtime
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 06:56:27.403: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-2806
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 16 06:56:33.872: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
... skipping 9 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":22,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:56:34.914: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:56:34.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 29 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: nfs]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver "nfs" does not support topology - skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:96
------------------------------
... skipping 155 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI Volume expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:419
    should expand volume by restarting pod if attach=on, nodeExpansion=on
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:448
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=on, nodeExpansion=on","total":-1,"completed":4,"skipped":16,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:56:35.918: INFO: Only supported for providers [aws] (not gce)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:56:35.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 12 lines ...
      Only supported for providers [aws] (not gce)

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1645
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":-1,"completed":11,"skipped":78,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 06:55:43.304: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 53 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:377
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":12,"skipped":78,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 68 lines ...
• [SLOW TEST:5.029 seconds]
[sig-node] RuntimeClass
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtimeclass.go:39
  should reject a Pod requesting a non-existent RuntimeClass
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtimeclass.go:42
------------------------------
{"msg":"PASSED [sig-node] RuntimeClass should reject a Pod requesting a non-existent RuntimeClass","total":-1,"completed":8,"skipped":27,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 25 lines ...
• [SLOW TEST:12.953 seconds]
[sig-storage] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":59,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:56:42.016: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:56:42.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 87 lines ...
• [SLOW TEST:161.233 seconds]
[sig-apps] CronJob
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should not emit unexpected warnings
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:172
------------------------------
{"msg":"PASSED [sig-apps] CronJob should not emit unexpected warnings","total":-1,"completed":7,"skipped":35,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:56:44.613: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 36 lines ...
• [SLOW TEST:64.110 seconds]
[k8s.io] Probing container
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":11,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:87
Jan 16 06:56:45.761: INFO: Driver "nfs" does not support topology - skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
... skipping 165 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]","total":-1,"completed":5,"skipped":19,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:56:51.640: INFO: Driver gluster doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:56:51.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 159 lines ...
• [SLOW TEST:42.304 seconds]
[sig-apps] Deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  test Deployment ReplicaSet orphaning and adoption regarding controllerRef
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:116
------------------------------
{"msg":"PASSED [sig-apps] Deployment test Deployment ReplicaSet orphaning and adoption regarding controllerRef","total":-1,"completed":9,"skipped":21,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:56:52.797: INFO: Driver emptydir doesn't support ext3 -- skipping
... skipping 80 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:377
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":9,"skipped":51,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 211 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: cinder]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Only supported for providers [openstack] (not gce)

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1080
------------------------------
... skipping 123 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:43
    files with FSGroup ownership should support (root,0644,tmpfs)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:62
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)","total":-1,"completed":11,"skipped":66,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:56:56.221: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:56:56.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 81 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:56:57.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8923" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should reject quota with invalid scopes","total":-1,"completed":16,"skipped":63,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":12,"skipped":88,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 06:56:54.865: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 46 lines ...
      Driver emptydir doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":10,"skipped":41,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 06:56:26.774: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 73 lines ...
Jan 16 06:56:59.643: INFO: stderr: ""
Jan 16 06:56:59.643: INFO: stdout: "etcd-0 etcd-1 scheduler controller-manager"
STEP: getting details of componentstatuses
STEP: getting status of etcd-0
Jan 16 06:56:59.643: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.159.163 --kubeconfig=/workspace/.kube/config get componentstatuses etcd-0'
Jan 16 06:57:00.500: INFO: stderr: ""
Jan 16 06:57:00.500: INFO: stdout: "NAME     STATUS    MESSAGE             ERROR\netcd-0   Healthy   {\"health\":\"true\"}   \n"
STEP: getting status of etcd-1
Jan 16 06:57:00.501: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.159.163 --kubeconfig=/workspace/.kube/config get componentstatuses etcd-1'
Jan 16 06:57:01.469: INFO: stderr: ""
Jan 16 06:57:01.469: INFO: stdout: "NAME     STATUS    MESSAGE             ERROR\netcd-1   Healthy   {\"health\":\"true\"}   \n"
STEP: getting status of scheduler
Jan 16 06:57:01.469: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.159.163 --kubeconfig=/workspace/.kube/config get componentstatuses scheduler'
Jan 16 06:57:02.238: INFO: stderr: ""
Jan 16 06:57:02.238: INFO: stdout: "NAME        STATUS    MESSAGE   ERROR\nscheduler   Healthy   ok        \n"
STEP: getting status of controller-manager
Jan 16 06:57:02.238: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.159.163 --kubeconfig=/workspace/.kube/config get componentstatuses controller-manager'
Jan 16 06:57:03.198: INFO: stderr: ""
Jan 16 06:57:03.199: INFO: stdout: "NAME                 STATUS    MESSAGE   ERROR\ncontroller-manager   Healthy   ok        \n"
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:57:03.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9120" for this suite.


... skipping 2 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl get componentstatuses
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:909
    should get componentstatuses
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:910
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl get componentstatuses should get componentstatuses","total":-1,"completed":12,"skipped":78,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:57:03.829: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:57:03.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 52 lines ...
• [SLOW TEST:30.629 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":6,"skipped":41,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:57:03.955: INFO: Driver gluster doesn't support DynamicPV -- skipping
... skipping 50 lines ...
• [SLOW TEST:12.699 seconds]
[sig-storage] Projected downwardAPI
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:90
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":6,"skipped":29,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:57:04.376: INFO: Driver local doesn't support ntfs -- skipping
... skipping 49 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41
    watch on custom resource definition objects [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":-1,"completed":9,"skipped":50,"failed":0}

S
------------------------------
[BeforeEach] [k8s.io] Kubelet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 16 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when scheduling a busybox command in a pod
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":26,"failed":0}

SS
------------------------------
[BeforeEach] [sig-apps] CronJob
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:128.454 seconds]
[sig-apps] CronJob
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should schedule multiple jobs concurrently
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:61
------------------------------
{"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently","total":-1,"completed":5,"skipped":17,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:57:06.178: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:57:06.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 10 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:173

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should create a quota without scopes","total":-1,"completed":10,"skipped":62,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 06:55:22.133: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 137 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:55
    [Testpattern: Dynamic PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly directory specified in the volumeMount
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:362
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":11,"skipped":62,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:57:06.432: INFO: Only supported for providers [openstack] (not gce)
... skipping 168 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":5,"skipped":21,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:57:09.170: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 66 lines ...
• [SLOW TEST:8.494 seconds]
[sig-api-machinery] Garbage collector
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":-1,"completed":10,"skipped":51,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:57:13.080: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:57:13.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 26 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:63
[It] should support unsafe sysctls which are actually whitelisted
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:110
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:10.127 seconds]
[k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should support unsafe sysctls which are actually whitelisted
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:110
------------------------------
{"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support unsafe sysctls which are actually whitelisted","total":-1,"completed":13,"skipped":79,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:57:13.962: INFO: Driver hostPathSymlink doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:57:13.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 80 lines ...
• [SLOW TEST:34.618 seconds]
[sig-api-machinery] Garbage collector
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should support cascading deletion of custom resources
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/garbage_collector.go:870
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should support cascading deletion of custom resources","total":-1,"completed":9,"skipped":29,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:57:14.582: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 97 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directories when readOnly specified in the volumeSource
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:392
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":9,"skipped":39,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:57:16.003: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 87 lines ...
• [SLOW TEST:53.112 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:855
------------------------------
{"msg":"PASSED [sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]","total":-1,"completed":9,"skipped":28,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:57:16.427: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:57:16.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 96 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (ext4)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should store data
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ext4)] volumes should store data","total":-1,"completed":5,"skipped":60,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:57:18.420: INFO: Only supported for providers [azure] (not gce)
... skipping 41 lines ...
• [SLOW TEST:16.739 seconds]
[k8s.io] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":33,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:57:21.121: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:57:21.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 106 lines ...
      Driver emptydir doesn't support PreprovisionedPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":11,"skipped":41,"failed":0}
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 06:57:01.819: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-230
... skipping 26 lines ...
• [SLOW TEST:20.027 seconds]
[sig-storage] Projected downwardAPI
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":41,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 68 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    One pod requesting one prebound PVC
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
      should be able to mount volume and read from pod1
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":7,"skipped":34,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:57:22.255: INFO: Driver gluster doesn't support DynamicPV -- skipping
... skipping 53 lines ...
• [SLOW TEST:9.679 seconds]
[sig-storage] PV Protection
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Verify "immediate" deletion of a PV that is not bound to a PVC
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:98
------------------------------
{"msg":"PASSED [sig-storage] PV Protection Verify \"immediate\" deletion of a PV that is not bound to a PVC","total":-1,"completed":10,"skipped":33,"failed":0}

SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 30 lines ...
• [SLOW TEST:27.105 seconds]
[sig-storage] Secrets
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":65,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:57:24.830: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 39 lines ...
      Driver local doesn't support InlineVolume -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":4,"skipped":16,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:93
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
... skipping 115 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:55
    [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should resize volume when PVC is edited while pod is using it
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:220
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":5,"skipped":16,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
... skipping 52 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should not mount / map unused volumes in a pod
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:332
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod","total":-1,"completed":7,"skipped":36,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:57:27.655: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 64 lines ...
• [SLOW TEST:13.479 seconds]
[k8s.io] InitContainer [NodeConformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should invoke init containers on a RestartNever pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":10,"skipped":44,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:57:29.496: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:57:29.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 60 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  [k8s.io] Pods Set QOS Class
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":11,"skipped":44,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:57:30.452: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 135 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should not mount / map unused volumes in a pod
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:332
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod","total":-1,"completed":12,"skipped":71,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 38 lines ...
Jan 16 06:57:09.731: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.159.163 --kubeconfig=/workspace/.kube/config explain e2e-test-crd-publish-openapi-4868-crds.spec'
Jan 16 06:57:10.588: INFO: stderr: ""
Jan 16 06:57:10.588: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-4868-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Jan 16 06:57:10.589: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.159.163 --kubeconfig=/workspace/.kube/config explain e2e-test-crd-publish-openapi-4868-crds.spec.bars'
Jan 16 06:57:11.514: INFO: stderr: ""
Jan 16 06:57:11.514: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-4868-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Jan 16 06:57:11.514: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.159.163 --kubeconfig=/workspace/.kube/config explain e2e-test-crd-publish-openapi-4868-crds.spec.bars2'
Jan 16 06:57:12.162: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:57:30.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6615" for this suite.
... skipping 2 lines ...
• [SLOW TEST:51.754 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":-1,"completed":13,"skipped":83,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:57:31.212: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 68 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when running a container with a new image
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:263
      should not be able to pull from private registry without secret [NodeConformance]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:380
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]","total":-1,"completed":6,"skipped":66,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:57:32.838: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 150 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: block]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 40 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:214

      Driver gluster doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":-1,"completed":6,"skipped":19,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 16 06:56:39.529: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 63 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":7,"skipped":19,"failed":0}
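What "should allow exec of files on the volume" verifies can be mimicked manually. A sketch, assuming a running pod named volume-client with the volume under test mounted at /mnt/volume1 (both names hypothetical):

  # Write an executable onto the mounted volume and run it in place;
  # a volume mounted noexec (or otherwise broken) would fail this step.
  kubectl exec volume-client -- sh -c 'echo "#!/bin/sh" > /mnt/volume1/probe.sh && chmod +x /mnt/volume1/probe.sh && /mnt/volume1/probe.sh'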
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:57:36.158: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:57:36.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 151 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should store data
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":7,"skipped":23,"failed":0}
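The "should store data" pattern above (an injector pod writes, a client pod later reads the same volume) can be sketched with two execs. Assuming pods named injector and client that share the volume at /mnt/volume1 (all names hypothetical):

  # Write from one pod, then confirm the same bytes are visible from another pod using the volume.
  kubectl exec injector -- sh -c 'echo "Hello from the injector" > /mnt/volume1/index.html'
  kubectl exec client -- cat /mnt/volume1/index.html    # expect: Hello from the injector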
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 16 06:57:37.826: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 16 06:57:37.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 33142 lines ...
... the elision above cuts into a JSON-escaped dump of cluster events; the recoverable fragment is rendered below with escaping and mid-line wraps removed ...
... routine Normal events (Scheduled, Pulled, Created, Started, Killing, LeaderElection, SuccessfulCreate, SuccessfulAttachVolume, WaitForFirstConsumer, ExternalProvisioning, Provisioning, ProvisioningSucceeded) for namespaces volume-1444 through volume-8392 omitted; the Warning events in the fragment were: ...

NAMESPACE     LAST SEEN  TYPE     REASON              OBJECT                                      MESSAGE
volume-1444   (entry truncated by the elision; message tail:)  ...e2e-minion-group-n0jl" cannot get resource "persistentvolumeclaims" in API group "" in the namespace "volume-1444": no relationship found between node "bootstrap-e2e-minion-group-n0jl" and this object
volume-1513   105s   Warning  ProvisioningFailed  persistentvolumeclaim/pvc-6bpr8             storageclass.storage.k8s.io "volume-1513" not found
volume-1956   5m31s  Warning  FailedAttachVolume  pod/gcepd-client                            Multi-Attach error for volume "gcepd-8l4w4" Volume is already exclusively attached to one node and can't be attached to another
volume-1956   6m19s  Warning  ProvisioningFailed  persistentvolumeclaim/pvc-lg2kc             storageclass.storage.k8s.io "volume-1956" not found
volume-2019   3m9s   Warning  FailedMount         pod/external-provisioner-7kdvh              MountVolume.SetUp failed for volume "default-token-tkh8t" : failed to sync secret cache: timed out waiting for the condition
volume-2270   41s    Warning  ProvisioningFailed  persistentvolumeclaim/pvc-x6spm             storageclass.storage.k8s.io "volume-2270" not found
volume-2574   5m1s   Warning  FailedMount         pod/exec-volume-test-preprovisionedpv-kltj  Unable to attach or mount volumes: unmounted volumes=[vol1 default-token-zmvmz], unattached volumes=[vol1 default-token-zmvmz]: error processing PVC volume-2574/pvc-8pj4l: failed to fetch PVC from API server: persistentvolumeclaims "pvc-8pj4l" is forbidden: User "system:node:bootstrap-e2e-minion-group-9dh8" cannot get resource "persistentvolumeclaims" in API group "" in the namespace "volume-2574": no relationship found between node "bootstrap-e2e-minion-group-9dh8" and this object
volume-2574   5m10s  Warning  ProvisioningFailed  persistentvolumeclaim/pvc-8pj4l             storageclass.storage.k8s.io "volume-2574" not found
volume-2639   2m8s   Warning  FailedMount         pod/csi-hostpath-attacher-0                 MountVolume.SetUp failed for volume "csi-attacher-token-snxhz" : secret "csi-attacher-token-snxhz" not found
volume-2639   4m5s   Warning  FailedCreate        statefulset/csi-hostpath-attacher           create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher failed error: pods "csi-hostpath-attacher-0" is forbidden: unable to validate against any pod security policy: []
volume-2639   110s   Warning  FailedMount         pod/csi-hostpath-provisioner-0              MountVolume.SetUp failed for volume "csi-provisioner-token-fpfgr" : secret "csi-provisioner-token-fpfgr" not found
volume-2639   4m5s   Warning  FailedCreate        statefulset/csi-hostpath-provisioner        create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner failed error: pods "csi-hostpath-provisioner-0" is forbidden: unable to validate against any pod security policy: []
volume-2639   4m5s   Warning  FailedCreate        statefulset/csi-hostpath-resizer            create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer failed error: pods "csi-hostpath-resizer-0" is forbidden: unable to validate against any pod security policy: []
volume-2639   2m1s   Warning  FailedMount         pod/csi-snapshotter-0                       MountVolume.SetUp failed for volume "csi-snapshotter-token-s2gkw" : secret "csi-snapshotter-token-s2gkw" not found
volume-2639   4m5s   Warning  FailedCreate        statefulset/csi-snapshotter                 create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter failed error: pods "csi-snapshotter-0" is forbidden: unable to validate against any pod security policy: []
volume-2840   4m40s  Warning  ProvisioningFailed  persistentvolumeclaim/pvc-w9wn8             storageclass.storage.k8s.io "volume-2840" not found
volume-3461   4m32s  Warning  FailedMount         pod/gluster-client                          Unable to attach or mount volumes: unmounted volumes=[gluster-volume-0], unattached volumes=[gluster-volume-0 default-token-c9wxr]: error processing PVC volume-3461/pvc-xzg2l: failed to fetch PVC from API server: persistentvolumeclaims "pvc-xzg2l" is forbidden: User "system:node:bootstrap-e2e-minion-group-mnwl" cannot get resource "persistentvolumeclaims" in API group "" in the namespace "volume-3461": no relationship found between node "bootstrap-e2e-minion-group-mnwl" and this object
volume-3991   2m22s  Warning  FailedMount         pod/external-provisioner-9jbwb              MountVolume.SetUp failed for volume "default-token-z2r6g" : failed to sync secret cache: timed out waiting for the condition
volume-4490   5m2s   Warning  ProvisioningFailed  persistentvolumeclaim/pvc-6vhsx             storageclass.storage.k8s.io "volume-4490" not found
volume-5853   79s    Warning  ProvisioningFailed  persistentvolumeclaim/pvc-xvz2m             storageclass.storage.k8s.io "volume-5853" not found
volume-7317   77s    Warning  FailedMount         pod/exec-volume-test-preprovisionedpv-klhc  Unable to attach or mount volumes: unmounted volumes=[vol1 default-token-ck8qc], unattached volumes=[vol1 default-token-ck8qc]: error processing PVC volume-7317/pvc-2klrt: failed to fetch PVC from API server: persistentvolumeclaims "pvc-2klrt" is forbidden: User "system:node:bootstrap-e2e-minion-group-n0jl" cannot get resource "persistentvolumeclaims" in API group "" in the namespace "volume-7317": no relationship found between node "bootstrap-e2e-minion-group-n0jl" and this object
volume-7317   83s    Warning  ProvisioningFailed  persistentvolumeclaim/pvc-2klrt             storageclass.storage.k8s.io "volume-7317" not found

... dump truncated again mid-entry at namespace volume-8392 ...
       Normal    Killing                              pod/local-injector                                                               Stopping container local-injector\nvolume-8392                          2m49s       Warning   ProvisioningFailed                   persistentvolumeclaim/pvc-rs45x                                                  storageclass.storage.k8s.io \"volume-8392\" not found\nvolume-8620                          2m36s       Normal    Pulled                               pod/exec-volume-test-preprovisionedpv-pxfz                                       Container image \"docker.io/library/nginx:1.14-alpine\" already present on machine\nvolume-8620                          2m36s       Normal    Created                              pod/exec-volume-test-preprovisionedpv-pxfz                                       Created container exec-container-preprovisionedpv-pxfz\nvolume-8620                          2m36s       Normal    Started                              pod/exec-volume-test-preprovisionedpv-pxfz                                       Started container exec-container-preprovisionedpv-pxfz\nvolume-8620                          2m49s       Warning   FailedMount                          pod/hostexec-bootstrap-e2e-minion-group-5wcz-w6dbg                               MountVolume.SetUp failed for volume \"default-token-49ktg\" : failed to sync secret cache: timed out waiting for the condition\nvolume-8620                          2m48s       Normal    Pulled                               pod/hostexec-bootstrap-e2e-minion-group-5wcz-w6dbg                               Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nvolume-8620                          2m47s       Normal    Created                              pod/hostexec-bootstrap-e2e-minion-group-5wcz-w6dbg                               Created container agnhost\nvolume-8620                          2m47s       Normal    Started                              pod/hostexec-bootstrap-e2e-minion-group-5wcz-w6dbg                               Started container agnhost\nvolume-8620                          2m25s       Normal    Killing                              pod/hostexec-bootstrap-e2e-minion-group-5wcz-w6dbg                               Stopping container agnhost\nvolume-9340                          3m35s       Normal    Scheduled                            pod/gcepd-client                                                                 Successfully assigned volume-9340/gcepd-client to bootstrap-e2e-minion-group-mnwl\nvolume-9340                          3m27s       Normal    SuccessfulAttachVolume               pod/gcepd-client                                                                 AttachVolume.Attach succeeded for volume \"pvc-51ba9a7b-d48f-42a1-9f0b-a2a3f9ec05ee\"\nvolume-9340                          3m22s       Normal    Pulled                               pod/gcepd-client                                                                 Container image \"docker.io/library/busybox:1.29\" already present on machine\nvolume-9340                          3m22s       Normal    Created                              pod/gcepd-client                                                                 Created container gcepd-client\nvolume-9340                          3m22s       Normal    Started                              pod/gcepd-client                                                                 Started container gcepd-client\nvolume-9340                          
3m9s        Normal    Killing                              pod/gcepd-client                                                                 Stopping container gcepd-client\nvolume-9340                          4m25s       Normal    Scheduled                            pod/gcepd-injector                                                               Successfully assigned volume-9340/gcepd-injector to bootstrap-e2e-minion-group-5wcz\nvolume-9340                          4m21s       Normal    SuccessfulAttachVolume               pod/gcepd-injector                                                               AttachVolume.Attach succeeded for volume \"pvc-51ba9a7b-d48f-42a1-9f0b-a2a3f9ec05ee\"\nvolume-9340                          4m8s        Normal    Pulled                               pod/gcepd-injector                                                               Container image \"docker.io/library/busybox:1.29\" already present on machine\nvolume-9340                          4m8s        Normal    Created                              pod/gcepd-injector                                                               Created container gcepd-injector\nvolume-9340                          4m5s        Normal    Started                              pod/gcepd-injector                                                               Started container gcepd-injector\nvolume-9340                          3m45s       Normal    Killing                              pod/gcepd-injector                                                               Stopping container gcepd-injector\nvolume-9340                          4m30s       Normal    WaitForFirstConsumer                 persistentvolumeclaim/gcepd48fl8                                                 waiting for first consumer to be created before binding\nvolume-9340                          4m27s       Normal    ProvisioningSucceeded                persistentvolumeclaim/gcepd48fl8                                                 Successfully provisioned volume pvc-51ba9a7b-d48f-42a1-9f0b-a2a3f9ec05ee using kubernetes.io/gce-pd\nvolume-9781                          2m37s       Normal    Scheduled                            pod/gluster-client                                                               Successfully assigned volume-9781/gluster-client to bootstrap-e2e-minion-group-9dh8\nvolume-9781                          2m32s       Normal    Pulled                               pod/gluster-client                                                               Container image \"docker.io/library/busybox:1.29\" already present on machine\nvolume-9781                          2m32s       Normal    Created                              pod/gluster-client                                                               Created container gluster-client\nvolume-9781                          2m31s       Normal    Started                              pod/gluster-client                                                               Started container gluster-client\nvolume-9781                          2m19s       Normal    Killing                              pod/gluster-client                                                               Stopping container gluster-client\nvolume-9781                          3m18s       Normal    Scheduled                            pod/gluster-injector                                                             Successfully assigned volume-9781/gluster-injector to bootstrap-e2e-minion-group-9dh8\nvolume-9781                          
3m17s       Warning   FailedMount                          pod/gluster-injector                                                             MountVolume.SetUp failed for volume \"default-token-8h75n\" : failed to sync secret cache: timed out waiting for the condition\nvolume-9781                          3m16s       Normal    Pulled                               pod/gluster-injector                                                             Container image \"docker.io/library/busybox:1.29\" already present on machine\nvolume-9781                          3m16s       Normal    Created                              pod/gluster-injector                                                             Created container gluster-injector\nvolume-9781                          3m16s       Normal    Started                              pod/gluster-injector                                                             Started container gluster-injector\nvolume-9781                          2m48s       Normal    Killing                              pod/gluster-injector                                                             Stopping container gluster-injector\nvolume-9781                          3m25s       Normal    Scheduled                            pod/gluster-server                                                               Successfully assigned volume-9781/gluster-server to bootstrap-e2e-minion-group-5wcz\nvolume-9781                          3m23s       Normal    Pulled                               pod/gluster-server                                                               Container image \"gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0\" already present on machine\nvolume-9781                          3m23s       Normal    Created                              pod/gluster-server                                                               Created container gluster-server\nvolume-9781                          3m21s       Normal    Started                              pod/gluster-server                                                               Started container gluster-server\nvolume-9781                          2m4s        Normal    Killing                              pod/gluster-server                                                               Stopping container gluster-server\nvolume-expand-2836                   30s         Normal    WaitForFirstConsumer                 persistentvolumeclaim/gcepd5m7fv                                                 waiting for first consumer to be created before binding\nvolume-expand-2836                   26s         Normal    ProvisioningSucceeded                persistentvolumeclaim/gcepd5m7fv                                                 Successfully provisioned volume pvc-89343211-bcb2-4e18-951f-1633dceab414 using kubernetes.io/gce-pd\nvolume-expand-2836                   25s         Normal    Scheduled                            pod/security-context-6e6c869b-d4d9-4aed-b222-b8e181d64a52                        Successfully assigned volume-expand-2836/security-context-6e6c869b-d4d9-4aed-b222-b8e181d64a52 to bootstrap-e2e-minion-group-mnwl\nvolume-expand-2836                   21s         Normal    SuccessfulAttachVolume               pod/security-context-6e6c869b-d4d9-4aed-b222-b8e181d64a52                        AttachVolume.Attach succeeded for volume \"pvc-89343211-bcb2-4e18-951f-1633dceab414\"\nvolume-expand-2836                   7s          Normal    SuccessfulMountVolume                
pod/security-context-6e6c869b-d4d9-4aed-b222-b8e181d64a52                        MapVolume.MapPodDevice succeeded for volume \"pvc-89343211-bcb2-4e18-951f-1633dceab414\" globalMapPath \"/var/lib/kubelet/plugins/kubernetes.io/gce-pd/volumeDevices/bootstrap-e2e-dynamic-pvc-89343211-bcb2-4e18-951f-1633dceab414\"\nvolume-expand-2836                   7s          Normal    SuccessfulMountVolume                pod/security-context-6e6c869b-d4d9-4aed-b222-b8e181d64a52                        MapVolume.MapPodDevice succeeded for volume \"pvc-89343211-bcb2-4e18-951f-1633dceab414\" volumeMapPath \"/var/lib/kubelet/pods/08615033-34f0-4051-a2cc-15df6d21f9cb/volumeDevices/kubernetes.io~gce-pd\"\nvolumemode-1240                      5m          Normal    Pulled                               pod/hostexec-bootstrap-e2e-minion-group-n0jl-qbjv4                               Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nvolumemode-1240                      4m59s       Normal    Created                              pod/hostexec-bootstrap-e2e-minion-group-n0jl-qbjv4                               Created container agnhost\nvolumemode-1240                      4m57s       Normal    Started                              pod/hostexec-bootstrap-e2e-minion-group-n0jl-qbjv4                               Started container agnhost\nvolumemode-1240                      4m38s       Normal    Killing                              pod/hostexec-bootstrap-e2e-minion-group-n0jl-qbjv4                               Stopping container agnhost\nvolumemode-1240                      5m38s       Normal    Pulled                               pod/hostexec-bootstrap-e2e-minion-group-n0jl-qwhwd                               Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nvolumemode-1240                      5m38s       Normal    Created                              pod/hostexec-bootstrap-e2e-minion-group-n0jl-qwhwd                               Created container agnhost\nvolumemode-1240                      5m37s       Normal    Started                              pod/hostexec-bootstrap-e2e-minion-group-n0jl-qwhwd                               Started container agnhost\nvolumemode-1240                      4m24s       Normal    Killing                              pod/hostexec-bootstrap-e2e-minion-group-n0jl-qwhwd                               Stopping container agnhost\nvolumemode-1240                      5m33s       Warning   ProvisioningFailed                   persistentvolumeclaim/pvc-x2zrd                                                  storageclass.storage.k8s.io \"volumemode-1240\" not found\nvolumemode-1240                      5m16s       Normal    Scheduled                            pod/security-context-feeefca0-b9b6-4955-8170-779b3ac21869                        Successfully assigned volumemode-1240/security-context-feeefca0-b9b6-4955-8170-779b3ac21869 to bootstrap-e2e-minion-group-n0jl\nvolumemode-1240                      5m13s       Normal    Pulled                               pod/security-context-feeefca0-b9b6-4955-8170-779b3ac21869                        Container image \"docker.io/library/busybox:1.29\" already present on machine\nvolumemode-1240                      5m13s       Normal    Created                              pod/security-context-feeefca0-b9b6-4955-8170-779b3ac21869                        Created container write-pod\nvolumemode-1240                      5m12s       Normal    Started                           
   pod/security-context-feeefca0-b9b6-4955-8170-779b3ac21869                        Started container write-pod\nvolumemode-1240                      4m36s       Normal    Killing                              pod/security-context-feeefca0-b9b6-4955-8170-779b3ac21869                        Stopping container write-pod\nvolumemode-4012                      49s         Normal    Pulled                               pod/csi-hostpath-attacher-0                                                      Container image \"quay.io/k8scsi/csi-attacher:v2.1.0\" already present on machine\nvolumemode-4012                      49s         Normal    Created                              pod/csi-hostpath-attacher-0                                                      Created container csi-attacher\nvolumemode-4012                      45s         Normal    Started                              pod/csi-hostpath-attacher-0                                                      Started container csi-attacher\nvolumemode-4012                      60s         Warning   FailedCreate                         statefulset/csi-hostpath-attacher                                                create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher failed error: pods \"csi-hostpath-attacher-0\" is forbidden: unable to validate against any pod security policy: []\nvolumemode-4012                      57s         Normal    SuccessfulCreate                     statefulset/csi-hostpath-attacher                                                create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful\nvolumemode-4012                      47s         Normal    Pulled                               pod/csi-hostpath-provisioner-0                                                   Container image \"quay.io/k8scsi/csi-provisioner:v1.5.0\" already present on machine\nvolumemode-4012                      47s         Normal    Created                              pod/csi-hostpath-provisioner-0                                                   Created container csi-provisioner\nvolumemode-4012                      44s         Normal    Started                              pod/csi-hostpath-provisioner-0                                                   Started container csi-provisioner\nvolumemode-4012                      60s         Warning   FailedCreate                         statefulset/csi-hostpath-provisioner                                             create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner failed error: pods \"csi-hostpath-provisioner-0\" is forbidden: unable to validate against any pod security policy: []\nvolumemode-4012                      58s         Normal    SuccessfulCreate                     statefulset/csi-hostpath-provisioner                                             create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful\nvolumemode-4012                      53s         Normal    Pulled                               pod/csi-hostpath-resizer-0                                                       Container image \"quay.io/k8scsi/csi-resizer:v0.4.0\" already present on machine\nvolumemode-4012                      53s         Normal    Created                              pod/csi-hostpath-resizer-0                                                       Created container csi-resizer\nvolumemode-4012                      50s         Normal    Started                              pod/csi-hostpath-resizer-0            
                                           Started container csi-resizer\nvolumemode-4012                      60s         Warning   FailedCreate                         statefulset/csi-hostpath-resizer                                                 create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer failed error: pods \"csi-hostpath-resizer-0\" is forbidden: unable to validate against any pod security policy: []\nvolumemode-4012                      59s         Normal    SuccessfulCreate                     statefulset/csi-hostpath-resizer                                                 create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful\nvolumemode-4012                      50s         Normal    ExternalProvisioning                 persistentvolumeclaim/csi-hostpath95hqq                                          waiting for a volume to be created, either by external provisioner \"csi-hostpath-volumemode-4012\" or manually created by system administrator\nvolumemode-4012                      44s         Normal    Provisioning                         persistentvolumeclaim/csi-hostpath95hqq                                          External provisioner is provisioning volume for claim \"volumemode-4012/csi-hostpath95hqq\"\nvolumemode-4012                      44s         Normal    ProvisioningSucceeded                persistentvolumeclaim/csi-hostpath95hqq                                          Successfully provisioned volume pvc-236862ba-78d3-496f-a35e-8fd19246456e\nvolumemode-4012                      58s         Normal    Pulled                               pod/csi-hostpathplugin-0                                                         Container image \"quay.io/k8scsi/csi-node-driver-registrar:v1.2.0\" already present on machine\nvolumemode-4012                      58s         Normal    Created                              pod/csi-hostpathplugin-0                                                         Created container node-driver-registrar\nvolumemode-4012                      54s         Normal    Started                              pod/csi-hostpathplugin-0                                                         Started container node-driver-registrar\nvolumemode-4012                      54s         Normal    Pulled                               pod/csi-hostpathplugin-0                                                         Container image \"quay.io/k8scsi/hostpathplugin:v1.3.0-rc1\" already present on machine\nvolumemode-4012                      53s         Normal    Created                              pod/csi-hostpathplugin-0                                                         Created container hostpath\nvolumemode-4012                      50s         Normal    Started                              pod/csi-hostpathplugin-0                                                         Started container hostpath\nvolumemode-4012                      50s         Normal    Pulled                               pod/csi-hostpathplugin-0                                                         Container image \"quay.io/k8scsi/livenessprobe:v1.1.0\" already present on machine\nvolumemode-4012                      50s         Normal    Created                              pod/csi-hostpathplugin-0                                                         Created container liveness-probe\nvolumemode-4012                      46s         Normal    Started                              pod/csi-hostpathplugin-0                                       
                  Started container liveness-probe\nvolumemode-4012                      63s         Normal    SuccessfulCreate                     statefulset/csi-hostpathplugin                                                   create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\nvolumemode-4012                      58s         Warning   FailedMount                          pod/csi-snapshotter-0                                                            MountVolume.SetUp failed for volume \"csi-snapshotter-token-fjvn4\" : failed to sync secret cache: timed out waiting for the condition\nvolumemode-4012                      50s         Normal    Pulled                               pod/csi-snapshotter-0                                                            Container image \"quay.io/k8scsi/csi-snapshotter:v2.0.0\" already present on machine\nvolumemode-4012                      50s         Normal    Created                              pod/csi-snapshotter-0                                                            Created container csi-snapshotter\nvolumemode-4012                      46s         Normal    Started                              pod/csi-snapshotter-0                                                            Started container csi-snapshotter\nvolumemode-4012                      60s         Warning   FailedCreate                         statefulset/csi-snapshotter                                                      create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter failed error: pods \"csi-snapshotter-0\" is forbidden: unable to validate against any pod security policy: []\nvolumemode-4012                      59s         Normal    SuccessfulCreate                     statefulset/csi-snapshotter                                                      create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter successful\nvolumemode-4012                      28s         Normal    Pulled                               pod/hostexec-bootstrap-e2e-minion-group-n0jl-h8swb                               Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nvolumemode-4012                      28s         Normal    Created                              pod/hostexec-bootstrap-e2e-minion-group-n0jl-h8swb                               Created container agnhost\nvolumemode-4012                      27s         Normal    Started                              pod/hostexec-bootstrap-e2e-minion-group-n0jl-h8swb                               Started container agnhost\nvolumemode-4012                      16s         Normal    Killing                              pod/hostexec-bootstrap-e2e-minion-group-n0jl-h8swb                               Stopping container agnhost\nvolumemode-4012                      42s         Normal    Scheduled                            pod/security-context-cb392de5-ec21-4cb1-a405-ab29bede44a3                        Successfully assigned volumemode-4012/security-context-cb392de5-ec21-4cb1-a405-ab29bede44a3 to bootstrap-e2e-minion-group-n0jl\nvolumemode-4012                      41s         Normal    SuccessfulAttachVolume               pod/security-context-cb392de5-ec21-4cb1-a405-ab29bede44a3                        AttachVolume.Attach succeeded for volume \"pvc-236862ba-78d3-496f-a35e-8fd19246456e\"\nvolumemode-4012                      38s         Normal    Pulled                               pod/security-context-cb392de5-ec21-4cb1-a405-ab29bede44a3                        Container image 
\"docker.io/library/busybox:1.29\" already present on machine\nvolumemode-4012                      38s         Normal    Created                              pod/security-context-cb392de5-ec21-4cb1-a405-ab29bede44a3                        Created container write-pod\nvolumemode-4012                      36s         Normal    Started                              pod/security-context-cb392de5-ec21-4cb1-a405-ab29bede44a3                        Started container write-pod\nvolumemode-4012                      16s         Normal    Killing                              pod/security-context-cb392de5-ec21-4cb1-a405-ab29bede44a3                        Stopping container write-pod\nvolumemode-443                       3m57s       Normal    Pulled                               pod/hostexec-bootstrap-e2e-minion-group-n0jl-smthv                               Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nvolumemode-443                       3m57s       Normal    Created                              pod/hostexec-bootstrap-e2e-minion-group-n0jl-smthv                               Created container agnhost\nvolumemode-443                       3m52s       Normal    Started                              pod/hostexec-bootstrap-e2e-minion-group-n0jl-smthv                               Started container agnhost\nvolumemode-443                       2m34s       Normal    Killing                              pod/hostexec-bootstrap-e2e-minion-group-n0jl-smthv                               Stopping container agnhost\nvolumemode-443                       3m3s        Normal    Pulled                               pod/hostexec-bootstrap-e2e-minion-group-n0jl-trflc                               Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nvolumemode-443                       3m3s        Normal    Created                              pod/hostexec-bootstrap-e2e-minion-group-n0jl-trflc                               Created container agnhost\nvolumemode-443                       3m3s        Normal    Started                              pod/hostexec-bootstrap-e2e-minion-group-n0jl-trflc                               Started container agnhost\nvolumemode-443                       2m49s       Normal    Killing                              pod/hostexec-bootstrap-e2e-minion-group-n0jl-trflc                               Stopping container agnhost\nvolumemode-443                       3m25s       Warning   ProvisioningFailed                   persistentvolumeclaim/pvc-s9zdl                                                  storageclass.storage.k8s.io \"volumemode-443\" not found\nvolumemode-443                       3m14s       Normal    Scheduled                            pod/security-context-c02f2498-2672-4af7-870b-abd489a2fa5f                        Successfully assigned volumemode-443/security-context-c02f2498-2672-4af7-870b-abd489a2fa5f to bootstrap-e2e-minion-group-n0jl\nvolumemode-443                       3m10s       Normal    Pulled                               pod/security-context-c02f2498-2672-4af7-870b-abd489a2fa5f                        Container image \"docker.io/library/busybox:1.29\" already present on machine\nvolumemode-443                       3m10s       Normal    Created                              pod/security-context-c02f2498-2672-4af7-870b-abd489a2fa5f                        Created container write-pod\nvolumemode-443                       3m9s        Normal    Started                              
pod/security-context-c02f2498-2672-4af7-870b-abd489a2fa5f                        Started container write-pod\nvolumemode-443                       2m48s       Normal    Killing                              pod/security-context-c02f2498-2672-4af7-870b-abd489a2fa5f                        Stopping container write-pod\nvolumemode-7712                      2m38s       Normal    Pulled                               pod/hostexec-bootstrap-e2e-minion-group-n0jl-cqk4s                               Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nvolumemode-7712                      2m37s       Normal    Created                              pod/hostexec-bootstrap-e2e-minion-group-n0jl-cqk4s                               Created container agnhost\nvolumemode-7712                      2m36s       Normal    Started                              pod/hostexec-bootstrap-e2e-minion-group-n0jl-cqk4s                               Started container agnhost\nvolumemode-7712                      2m25s       Normal    Killing                              pod/hostexec-bootstrap-e2e-minion-group-n0jl-cqk4s                               Stopping container agnhost\nvolumemode-7712                      3m11s       Normal    Pulled                               pod/hostexec-bootstrap-e2e-minion-group-n0jl-v6658                               Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nvolumemode-7712                      3m11s       Normal    Created                              pod/hostexec-bootstrap-e2e-minion-group-n0jl-v6658                               Created container agnhost\nvolumemode-7712                      3m10s       Normal    Started                              pod/hostexec-bootstrap-e2e-minion-group-n0jl-v6658                               Started container agnhost\nvolumemode-7712                      3m5s        Warning   ProvisioningFailed                   persistentvolumeclaim/pvc-dc59c                                                  storageclass.storage.k8s.io \"volumemode-7712\" not found\nvolumemode-7712                      2m46s       Normal    Scheduled                            pod/security-context-e38ceac1-9f2a-4bf1-b6e9-352bc915af45                        Successfully assigned volumemode-7712/security-context-e38ceac1-9f2a-4bf1-b6e9-352bc915af45 to bootstrap-e2e-minion-group-n0jl\nvolumemode-7712                      2m45s       Normal    Pulled                               pod/security-context-e38ceac1-9f2a-4bf1-b6e9-352bc915af45                        Container image \"docker.io/library/busybox:1.29\" already present on machine\nvolumemode-7712                      2m44s       Normal    Created                              pod/security-context-e38ceac1-9f2a-4bf1-b6e9-352bc915af45                        Created container write-pod\nvolumemode-7712                      2m44s       Normal    Started                              pod/security-context-e38ceac1-9f2a-4bf1-b6e9-352bc915af45                        Started container write-pod\nvolumemode-7712                      2m23s       Normal    Killing                              pod/security-context-e38ceac1-9f2a-4bf1-b6e9-352bc915af45                        Stopping container write-pod\nvolumemode-807                       108s        Normal    Pulled                               pod/csi-hostpath-attacher-0                                                      Container image \"quay.io/k8scsi/csi-attacher:v2.1.0\" already present on 
machine\nvolumemode-807                       108s        Normal    Created                              pod/csi-hostpath-attacher-0                                                      Created container csi-attacher\nvolumemode-807                       107s        Normal    Started                              pod/csi-hostpath-attacher-0                                                      Started container csi-attacher\nvolumemode-807                       55s         Normal    Killing                              pod/csi-hostpath-attacher-0                                                      Stopping container csi-attacher\nvolumemode-807                       117s        Warning   FailedCreate                         statefulset/csi-hostpath-attacher                                                create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher failed error: pods \"csi-hostpath-attacher-0\" is forbidden: unable to validate against any pod security policy: []\nvolumemode-807                       114s        Normal    SuccessfulCreate                     statefulset/csi-hostpath-attacher                                                create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful\nvolumemode-807                       114s        Warning   FailedMount                          pod/csi-hostpath-provisioner-0                                                   MountVolume.SetUp failed for volume \"csi-provisioner-token-2sjbs\" : failed to sync secret cache: timed out waiting for the condition\nvolumemode-807                       110s        Normal    Pulled                               pod/csi-hostpath-provisioner-0                                                   Container image \"quay.io/k8scsi/csi-provisioner:v1.5.0\" already present on machine\nvolumemode-807                       109s        Normal    Created                              pod/csi-hostpath-provisioner-0                                                   Created container csi-provisioner\nvolumemode-807                       108s        Normal    Started                              pod/csi-hostpath-provisioner-0                                                   Started container csi-provisioner\nvolumemode-807                       53s         Normal    Killing                              pod/csi-hostpath-provisioner-0                                                   Stopping container csi-provisioner\nvolumemode-807                       117s        Warning   FailedCreate                         statefulset/csi-hostpath-provisioner                                             create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner failed error: pods \"csi-hostpath-provisioner-0\" is forbidden: unable to validate against any pod security policy: []\nvolumemode-807                       115s        Normal    SuccessfulCreate                     statefulset/csi-hostpath-provisioner                                             create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful\nvolumemode-807                       115s        Warning   FailedMount                          pod/csi-hostpath-resizer-0                                                       MountVolume.SetUp failed for volume \"csi-resizer-token-cplz6\" : failed to sync secret cache: timed out waiting for the condition\nvolumemode-807                       111s        Normal    Pulled                               pod/csi-hostpath-resizer-0     
                                                  Container image \"quay.io/k8scsi/csi-resizer:v0.4.0\" already present on machine\nvolumemode-807                       111s        Normal    Created                              pod/csi-hostpath-resizer-0                                                       Created container csi-resizer\nvolumemode-807                       108s        Normal    Started                              pod/csi-hostpath-resizer-0                                                       Started container csi-resizer\nvolumemode-807                       52s         Normal    Killing                              pod/csi-hostpath-resizer-0                                                       Stopping container csi-resizer\nvolumemode-807                       117s        Warning   FailedCreate                         statefulset/csi-hostpath-resizer                                                 create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer failed error: pods \"csi-hostpath-resizer-0\" is forbidden: unable to validate against any pod security policy: []\nvolumemode-807                       116s        Normal    SuccessfulCreate                     statefulset/csi-hostpath-resizer                                                 create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful\nvolumemode-807                       110s        Normal    ExternalProvisioning                 persistentvolumeclaim/csi-hostpath2pnht                                          waiting for a volume to be created, either by external provisioner \"csi-hostpath-volumemode-807\" or manually created by system administrator\nvolumemode-807                       107s        Normal    Provisioning                         persistentvolumeclaim/csi-hostpath2pnht                                          External provisioner is provisioning volume for claim \"volumemode-807/csi-hostpath2pnht\"\nvolumemode-807                       107s        Normal    ProvisioningSucceeded                persistentvolumeclaim/csi-hostpath2pnht                                          Successfully provisioned volume pvc-5a442ddd-7cf8-46b6-8ecd-b96f7b613aaf\nvolumemode-807                       118s        Warning   FailedMount                          pod/csi-hostpathplugin-0                                                         MountVolume.SetUp failed for volume \"default-token-pnx5p\" : failed to sync secret cache: timed out waiting for the condition\nvolumemode-807                       116s        Normal    Pulled                               pod/csi-hostpathplugin-0                                                         Container image \"quay.io/k8scsi/csi-node-driver-registrar:v1.2.0\" already present on machine\nvolumemode-807                       116s        Normal    Created                              pod/csi-hostpathplugin-0                                                         Created container node-driver-registrar\nvolumemode-807                       116s        Normal    Started                              pod/csi-hostpathplugin-0                                                         Started container node-driver-registrar\nvolumemode-807                       116s        Normal    Pulled                               pod/csi-hostpathplugin-0                                                         Container image \"quay.io/k8scsi/hostpathplugin:v1.3.0-rc1\" already present on machine\nvolumemode-807                       115s        
Normal    Created                              pod/csi-hostpathplugin-0                                                         Created container hostpath\nvolumemode-807                       115s        Normal    Started                              pod/csi-hostpathplugin-0                                                         Started container hostpath\nvolumemode-807                       115s        Normal    Pulled                               pod/csi-hostpathplugin-0                                                         Container image \"quay.io/k8scsi/livenessprobe:v1.1.0\" already present on machine\nvolumemode-807                       115s        Normal    Created                              pod/csi-hostpathplugin-0                                                         Created container liveness-probe\nvolumemode-807                       113s        Normal    Started                              pod/csi-hostpathplugin-0                                                         Started container liveness-probe\nvolumemode-807                       54s         Normal    Killing                              pod/csi-hostpathplugin-0                                                         Stopping container node-driver-registrar\nvolumemode-807                       54s         Normal    Killing                              pod/csi-hostpathplugin-0                                                         Stopping container liveness-probe\nvolumemode-807                       54s         Normal    Killing                              pod/csi-hostpathplugin-0                                                         Stopping container hostpath\nvolumemode-807                       53s         Warning   Unhealthy                            pod/csi-hostpathplugin-0                                                         Liveness probe failed: Get http://10.64.1.218:9898/healthz: dial tcp 10.64.1.218:9898: connect: connection refused\nvolumemode-807                       52s         Warning   FailedPreStopHook                    pod/csi-hostpathplugin-0                                                         Exec lifecycle hook ([/bin/sh -c rm -rf /registration/csi-hostpath /registration/csi-hostpath-reg.sock]) for Container \"node-driver-registrar\" in Pod \"csi-hostpathplugin-0_volumemode-807(d4f304b6-22ce-4e33-9631-e2b9eec828b2)\" failed - error: command '/bin/sh -c rm -rf /registration/csi-hostpath /registration/csi-hostpath-reg.sock' exited with 126: , message: \"OCI runtime exec failed: exec failed: container_linux.go:345: starting container process caused \\\"exec: \\\\\\\"/bin/sh\\\\\\\": stat /bin/sh: no such file or directory\\\": unknown\\r\\n\"\nvolumemode-807                       119s        Normal    SuccessfulCreate                     statefulset/csi-hostpathplugin                                                   create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\nvolumemode-807                       115s        Warning   FailedMount                          pod/csi-snapshotter-0                                                            MountVolume.SetUp failed for volume \"csi-snapshotter-token-jzxf9\" : failed to sync secret cache: timed out waiting for the condition\nvolumemode-807                       111s        Normal    Pulled                               pod/csi-snapshotter-0                                                            Container image \"quay.io/k8scsi/csi-snapshotter:v2.0.0\" already present on 
machine\nvolumemode-807                       110s        Normal    Created                              pod/csi-snapshotter-0                                                            Created container csi-snapshotter\nvolumemode-807                       108s        Normal    Started                              pod/csi-snapshotter-0                                                            Started container csi-snapshotter\nvolumemode-807                       50s         Normal    Killing                              pod/csi-snapshotter-0                                                            Stopping container csi-snapshotter\nvolumemode-807                       116s        Warning   FailedCreate                         statefulset/csi-snapshotter                                                      create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter failed error: pods \"csi-snapshotter-0\" is forbidden: unable to validate against any pod security policy: []\nvolumemode-807                       116s        Normal    SuccessfulCreate                     statefulset/csi-snapshotter                                                      create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter successful\nvolumemode-807                       93s         Normal    Pulled                               pod/hostexec-bootstrap-e2e-minion-group-9dh8-spqjw                               Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nvolumemode-807                       93s         Normal    Created                              pod/hostexec-bootstrap-e2e-minion-group-9dh8-spqjw                               Created container agnhost\nvolumemode-807                       92s         Normal    Started                              pod/hostexec-bootstrap-e2e-minion-group-9dh8-spqjw                               Started container agnhost\nvolumemode-807                       83s         Normal    Killing                              pod/hostexec-bootstrap-e2e-minion-group-9dh8-spqjw                               Stopping container agnhost\nvolumemode-807                       105s        Normal    Scheduled                            pod/security-context-84fc2480-ffb8-4680-98c5-f20fb45d5ca0                        Successfully assigned volumemode-807/security-context-84fc2480-ffb8-4680-98c5-f20fb45d5ca0 to bootstrap-e2e-minion-group-9dh8\nvolumemode-807                       104s        Normal    SuccessfulAttachVolume               pod/security-context-84fc2480-ffb8-4680-98c5-f20fb45d5ca0                        AttachVolume.Attach succeeded for volume \"pvc-5a442ddd-7cf8-46b6-8ecd-b96f7b613aaf\"\nvolumemode-807                       99s         Normal    Pulled                               pod/security-context-84fc2480-ffb8-4680-98c5-f20fb45d5ca0                        Container image \"docker.io/library/busybox:1.29\" already present on machine\nvolumemode-807                       99s         Normal    Created                              pod/security-context-84fc2480-ffb8-4680-98c5-f20fb45d5ca0                        Created container write-pod\nvolumemode-807                       98s         Normal    Started                              pod/security-context-84fc2480-ffb8-4680-98c5-f20fb45d5ca0                        Started container write-pod\nvolumemode-807                       83s         Normal    Killing                              pod/security-context-84fc2480-ffb8-4680-98c5-f20fb45d5ca0                        Stopping 
container write-pod\nwebhook-3219                         4m9s        Normal    Scheduled                            pod/sample-webhook-deployment-5f65f8c764-srlv2                                   Successfully assigned webhook-3219/sample-webhook-deployment-5f65f8c764-srlv2 to bootstrap-e2e-minion-group-5wcz\nwebhook-3219                         4m3s        Normal    Pulled                               pod/sample-webhook-deployment-5f65f8c764-srlv2                                   Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nwebhook-3219                         4m3s        Normal    Created                              pod/sample-webhook-deployment-5f65f8c764-srlv2                                   Created container sample-webhook\nwebhook-3219                         4m2s        Normal    Started                              pod/sample-webhook-deployment-5f65f8c764-srlv2                                   Started container sample-webhook\nwebhook-3219                         4m9s        Normal    SuccessfulCreate                     replicaset/sample-webhook-deployment-5f65f8c764                                  Created pod: sample-webhook-deployment-5f65f8c764-srlv2\nwebhook-3219                         4m9s        Normal    ScalingReplicaSet                    deployment/sample-webhook-deployment                                             Scaled up replica set sample-webhook-deployment-5f65f8c764 to 1\nwebhook-3426                         10s         Normal    Scheduled                            pod/sample-webhook-deployment-5f65f8c764-57ttx                                   Successfully assigned webhook-3426/sample-webhook-deployment-5f65f8c764-57ttx to bootstrap-e2e-minion-group-n0jl\nwebhook-3426                         7s          Normal    Pulled                               pod/sample-webhook-deployment-5f65f8c764-57ttx                                   Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nwebhook-3426                         7s          Normal    Created                              pod/sample-webhook-deployment-5f65f8c764-57ttx                                   Created container sample-webhook\nwebhook-3426                         7s          Normal    Started                              pod/sample-webhook-deployment-5f65f8c764-57ttx                                   Started container sample-webhook\nwebhook-3426                         11s         Normal    SuccessfulCreate                     replicaset/sample-webhook-deployment-5f65f8c764                                  Created pod: sample-webhook-deployment-5f65f8c764-57ttx\nwebhook-3426                         11s         Normal    ScalingReplicaSet                    deployment/sample-webhook-deployment                                             Scaled up replica set sample-webhook-deployment-5f65f8c764 to 1\nwebhook-3683                         114s        Normal    Scheduled                            pod/sample-webhook-deployment-5f65f8c764-8fcs5                                   Successfully assigned webhook-3683/sample-webhook-deployment-5f65f8c764-8fcs5 to bootstrap-e2e-minion-group-mnwl\nwebhook-3683                         113s        Warning   FailedMount                          pod/sample-webhook-deployment-5f65f8c764-8fcs5                                   MountVolume.SetUp failed for volume \"webhook-certs\" : failed to sync secret cache: timed out waiting for the condition\nwebhook-3683            
             113s        Warning   FailedMount                          pod/sample-webhook-deployment-5f65f8c764-8fcs5                                   MountVolume.SetUp failed for volume \"default-token-gl2nk\" : failed to sync secret cache: timed out waiting for the condition\nwebhook-3683                         112s        Normal    Pulled                               pod/sample-webhook-deployment-5f65f8c764-8fcs5                                   Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nwebhook-3683                         112s        Normal    Created                              pod/sample-webhook-deployment-5f65f8c764-8fcs5                                   Created container sample-webhook\nwebhook-3683                         112s        Normal    Started                              pod/sample-webhook-deployment-5f65f8c764-8fcs5                                   Started container sample-webhook\nwebhook-3683                         116s        Normal    SuccessfulCreate                     replicaset/sample-webhook-deployment-5f65f8c764                                  Created pod: sample-webhook-deployment-5f65f8c764-8fcs5\nwebhook-3683                         117s        Normal    ScalingReplicaSet                    deployment/sample-webhook-deployment                                             Scaled up replica set sample-webhook-deployment-5f65f8c764 to 1\nwebhook-6106                         48s         Normal    Scheduled                            pod/sample-webhook-deployment-5f65f8c764-pnmnq                                   Successfully assigned webhook-6106/sample-webhook-deployment-5f65f8c764-pnmnq to bootstrap-e2e-minion-group-9dh8\nwebhook-6106                         38s         Normal    Pulled                               pod/sample-webhook-deployment-5f65f8c764-pnmnq                                   Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nwebhook-6106                         37s         Normal    Created                              pod/sample-webhook-deployment-5f65f8c764-pnmnq                                   Created container sample-webhook\nwebhook-6106                         35s         Normal    Started                              pod/sample-webhook-deployment-5f65f8c764-pnmnq                                   Started container sample-webhook\nwebhook-6106                         48s         Normal    SuccessfulCreate                     replicaset/sample-webhook-deployment-5f65f8c764                                  Created pod: sample-webhook-deployment-5f65f8c764-pnmnq\nwebhook-6106                         49s         Normal    ScalingReplicaSet                    deployment/sample-webhook-deployment                                             Scaled up replica set sample-webhook-deployment-5f65f8c764 to 1\nwebhook-6106                         23s         Normal    Scheduled                            pod/webhook-to-be-mutated                                                        Successfully assigned webhook-6106/webhook-to-be-mutated to bootstrap-e2e-minion-group-mnwl\nwebhook-808                          55s         Normal    Scheduled                            pod/sample-webhook-deployment-5f65f8c764-kjq8p                                   Successfully assigned webhook-808/sample-webhook-deployment-5f65f8c764-kjq8p to bootstrap-e2e-minion-group-9dh8\nwebhook-808                          46s         Normal    Pulled                               pod/sample-webhook-deployment-5f65f8c764-kjq8p                                   Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nwebhook-808                          46s         Normal    Created                              pod/sample-webhook-deployment-5f65f8c764-kjq8p                                   Created container sample-webhook\nwebhook-808                          42s         Normal    Started                              pod/sample-webhook-deployment-5f65f8c764-kjq8p                                   Started container sample-webhook\nwebhook-808                          55s         Normal    SuccessfulCreate                     replicaset/sample-webhook-deployment-5f65f8c764                                  Created pod: sample-webhook-deployment-5f65f8c764-kjq8p\nwebhook-808                          55s         Normal    ScalingReplicaSet                    deployment/sample-webhook-deployment                                             Scaled up replica set sample-webhook-deployment-5f65f8c764 to 1\n"
Jan 16 07:05:55.081: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.159.163 --kubeconfig=/workspace/.kube/config get resourcequotas --all-namespaces'
Jan 16 07:05:55.573: INFO: stderr: ""
Jan 16 07:05:55.573: INFO: stdout: "NAMESPACE            NAME                   AGE     REQUEST                                                                                                                                                                                                                                                                                                                                                                                                                                                                     LIMIT\nkubectl-3606         rq1nameq2zjmwrmgm      1s      cpu: 0/5M                                                                                                                                                                                                                                                                                                                                                                                                                                                                   \nreplicaset-7952      condition-test         5m8s    pods: 2/2                                                                                                                                                                                                                                                                                                                                                                                                                                                                   \nresourcequota-2509   test-quota             68s     configmaps: 0/2, count/replicasets.apps: 0/5, cpu: 0/1, ephemeral-storage: 0/50Gi, gold.storageclass.storage.k8s.io/persistentvolumeclaims: 0/10, gold.storageclass.storage.k8s.io/requests.storage: 0/10Gi, memory: 0/500Mi, persistentvolumeclaims: 0/10, pods: 0/5, replicationcontrollers: 0/10, requests.example.com/dongle: 0/3, requests.storage: 0/10Gi, resourcequotas: 1/1, secrets: 1/10, services: 0/10, services.loadbalancers: 0/1, services.nodeports: 0/1   \nresourcequota-6329   quota-besteffort       4m23s   pods: 0/5                                                                                                                                                                                                                                                                                                                                                                                                                                                                   \nresourcequota-6329   quota-not-besteffort   4m20s   pods: 0/5                                                                                                                                                                                                                                                                                                                                                                                                                                                                   \nresourcequota-7914   test-quota             3m21s   configmaps: 0/2, count/replicasets.apps: 0/5, cpu: 0/1, ephemeral-storage: 0/50Gi, gold.storageclass.storage.k8s.io/persistentvolumeclaims: 0/10, gold.storageclass.storage.k8s.io/requests.storage: 0/10Gi, memory: 0/500Mi, persistentvolumeclaims: 0/10, pods: 0/5, replicationcontrollers: 0/10, requests.example.com/dongle: 0/3, requests.storage: 0/10Gi, resourcequotas: 1/1, secrets: 1/10, services: 0/10, services.loadbalancers: 0/1, services.nodeports: 0/1   \nresourcequota-8920   test-quota             3m      configmaps: 0/2, count/replicasets.apps: 0/5, cpu: 0/1, ephemeral-storage: 0/50Gi, gold.storageclass.storage.k8s.io/persistentvolumeclaims: 0/10, gold.storageclass.storage.k8s.io/requests.storage: 0/10Gi, memory: 0/500Mi, persistentvolumeclaims: 0/10, pods: 0/5, replicationcontrollers: 0/10, requests.example.com/dongle: 0/3, requests.storage: 0/10Gi, resourcequotas: 1/1, secrets: 1/10, services: 0/10, services.loadbalancers: 0/1, services.nodeports: 0/1   \n"
Jan 16 07:05:56.054: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.159.163 --kubeconfig=/workspace/.kube/config get secrets --all-namespaces'
Jan 16 07:05:56.945: INFO: stderr: ""
Jan 16 07:05:56.945: INFO: stdout: "NAMESPACE                            NAME                                                             TYPE                                  DATA   AGE\napparmor-8554                        default-token-tsm85                                              kubernetes.io/service-account-token   3      96s\ncadvisor-6993                        default-token-clp6g                                              kubernetes.io/service-account-token   3      2m33s\ncertificates-1841                    default-token-tcfzp                                              kubernetes.io/service-account-token   3      25s\nclientset-1128                       default-token-fj9gn                                              kubernetes.io/service-account-token   3      23s\nclientset-3728                       default-token-dlcz4                                              kubernetes.io/service-account-token   3      3m\nconfigmap-5408                       default-token-7874n                                              kubernetes.io/service-account-token   3      37s\nconfigmap-5446                       default-token-x7f7s                                              kubernetes.io/service-account-token   3      2m20s\nconfigmap-7699                       default-token-tkb77                                              kubernetes.io/service-account-token   3      92s\nconfigmap-8308                       default-token-jddn6                                              kubernetes.io/service-account-token   3      108s\ncontainer-lifecycle-hook-9891        default-token-h8xkl                                              kubernetes.io/service-account-token   3      46s\ncontainer-runtime-6893               default-token-k6zwv                                              kubernetes.io/service-account-token   3      2m57s\ncontainer-runtime-9188               default-token-dpwwr                                              kubernetes.io/service-account-token   3      19s\ncontainers-1118                      default-token-krnw5                                              kubernetes.io/service-account-token   3      18s\ncontainers-3208                      default-token-rnsrf                                              kubernetes.io/service-account-token   3      2m58s\ncontainers-8454                      default-token-rdlj2                                              kubernetes.io/service-account-token   3      3m31s\ncrd-publish-openapi-5746             default-token-mtq2q                                              kubernetes.io/service-account-token   3      3m14s\ncrd-publish-openapi-8031             default-token-zngkx                                              kubernetes.io/service-account-token   3      5m43s\ncrd-publish-openapi-8455             default-token-xjd75                                              kubernetes.io/service-account-token   3      46s\ncrd-webhook-3623                     default-token-wbztm                                              kubernetes.io/service-account-token   3      3m42s\ncronjob-3956                         default-token-9zr5z                                              kubernetes.io/service-account-token   3      2m22s\ncronjob-4939                         default-token-82jf6                                              kubernetes.io/service-account-token   3      72s\ncsi-mock-volumes-2747                csi-attacher-token-whsd5                                         kubernetes.io/service-account-token   3      2m\ncsi-mock-volumes-2747                csi-mock-token-zkw8k                                             kubernetes.io/service-account-token   3      115s\ncsi-mock-volumes-2747                csi-provisioner-token-6t66q                                      kubernetes.io/service-account-token   3      117s\ncsi-mock-volumes-2747                csi-resizer-token-2nzhm                                          kubernetes.io/service-account-token   3      116s\ncsi-mock-volumes-2747                default-token-8vjkv                                              kubernetes.io/service-account-token   3      2m2s\ncsi-mock-volumes-4068                default-token-7p8qk                                              kubernetes.io/service-account-token   3      7m\ncsi-mock-volumes-5993                default-token-qb5nj                                              kubernetes.io/service-account-token   3      5m47s\ncsi-mock-volumes-8708                csi-attacher-token-jcqst                                         kubernetes.io/service-account-token   3      13s\ncsi-mock-volumes-8708                csi-mock-token-45ns5                                             kubernetes.io/service-account-token   3      7s\ncsi-mock-volumes-8708                csi-provisioner-token-jk5f9                                      kubernetes.io/service-account-token   3      12s\ncsi-mock-volumes-8708                csi-resizer-token-4s6lg                                          kubernetes.io/service-account-token   3      10s\ncsi-mock-volumes-8708                default-token-rgxdp                                              kubernetes.io/service-account-token   3      16s\ncustom-resource-definition-283       default-token-m8tlw                                              kubernetes.io/service-account-token   3      2m40s\ncustom-resource-definition-7671      default-token-7lsft                                              kubernetes.io/service-account-token   3      76s\ndefault                              default-token-5lkfg                                              kubernetes.io/service-account-token   3      18m\ndeployment-1104                      default-token-72jfc                                              kubernetes.io/service-account-token   3      5m18s\ndeployment-2085                      default-token-sjfjr                                              kubernetes.io/service-account-token   3      2m59s\ndeployment-3447                      default-token-p9z2q                                              kubernetes.io/service-account-token   3      70s\ndeployment-7737                      default-token-mpc9q                                              kubernetes.io/service-account-token   3      79s\ndisruption-4879                      default-token-prptn                                              kubernetes.io/service-account-token   3      2m50s\ndisruption-5978                      default-token-xxwqd                                              kubernetes.io/service-account-token   3      18s\ndns-1173                             default-token-qvgjf                                              kubernetes.io/service-account-token   3      2m43s\ndns-6326                             default-token-ncnqv                                              kubernetes.io/service-account-token   3      4m18s\ndns-9982                             default-token-kprsw                                              kubernetes.io/service-account-token   3      2m24s\ndownward-api-3367                    default-token-jcq94                                              kubernetes.io/service-account-token   3      3m39s\ndownward-api-6746                    default-token-qnxgs                                              kubernetes.io/service-account-token   3      2m40s\ndownward-api-6943                    default-token-6n7bn                                              kubernetes.io/service-account-token   3      3m11s\ne2e-kubelet-etc-hosts-768            default-token-cs5tt                                              kubernetes.io/service-account-token   3      3s\ne2e-privileged-pod-8651              default-token-rj598                                              kubernetes.io/service-account-token   3      2m39s\nemptydir-1365                        default-token-66lq6                                              kubernetes.io/service-account-token   3      4m13s\nemptydir-5078                        default-token-47bmk                                              kubernetes.io/service-account-token   3      4m35s\nemptydir-7186                        default-token-vqbl2                                              kubernetes.io/service-account-token   3      2m43s\nemptydir-920                         default-token-gjmgw                                              kubernetes.io/service-account-token   3      109s\nflexvolume-7308                      default-token-nt6g8                                              kubernetes.io/service-account-token   3      89s\nflexvolume-9357                      default-token-8wh4v                                              kubernetes.io/service-account-token   3      5m21s\ngc-1557                              default-token-jr74b                                              kubernetes.io/service-account-token   3      2m38s\ngc-2159                              default-token-rfz6p                                              kubernetes.io/service-account-token   3      50s\ngc-7576                              default-token-fxjmc                                              kubernetes.io/service-account-token   3      43s\nhostpath-7207                        default-token-nzcmz                                              kubernetes.io/service-account-token   3      94s\nhostpath-9363                        default-token-669fv                                              kubernetes.io/service-account-token   3      4m14s\ninit-container-7210                  default-token-86wvs                                              kubernetes.io/service-account-token   3      3m27s\njob-8075                             default-token-l82cw                                              kubernetes.io/service-account-token   3      2m49s\nkube-node-lease                      default-token-8psb7                                              kubernetes.io/service-account-token   3      18m\nkube-public                          default-token-vrbql                                              kubernetes.io/service-account-token   3      18m\nkube-system                          attachdetach-controller-token-m598v                              kubernetes.io/service-account-token   3      18m\nkube-system                          certificate-controller-token-tkbxw                               kubernetes.io/service-account-token   3      18m\nkube-system                          cloud-provider-token-ngfmp                                       kubernetes.io/service-account-token   3      18m\nkube-system                          clusterrole-aggregation-controller-token-4mxs6                   kubernetes.io/service-account-token   3      18m\nkube-system                          coredns-token-gfsp7                                              kubernetes.io/service-account-token   3      18m\nkube-system                          cronjob-controller-token-xg42t                                   kubernetes.io/service-account-token   3      18m\nkube-system                          daemon-set-controller-token-lbq4j                                kubernetes.io/service-account-token   3      18m\nkube-system                          default-token-4t8cz                                              kubernetes.io/service-account-token   3      18m\nkube-system                          deployment-controller-token-ffndk                                kubernetes.io/service-account-token   3      18m\nkube-system                          disruption-controller-token-mpmkg                                kubernetes.io/service-account-token   3      18m\nkube-system                          endpoint-controller-token-4vvmm                                  kubernetes.io/service-account-token   3      18m\nkube-system                          event-exporter-sa-token-xfg9x                                    kubernetes.io/service-account-token   3      18m\nkube-system                          expand-controller-token-bwh59                                    kubernetes.io/service-account-token   3      18m\nkube-system                          fluentd-gcp-scaler-token-v269d                                   kubernetes.io/service-account-token   3      18m\nkube-system                          fluentd-gcp-token-vgcct                                          kubernetes.io/service-account-token   3      18m\nkube-system                          generic-garbage-collector-token-h5gf2                            kubernetes.io/service-account-token   3      18m\nkube-system                          horizontal-pod-autoscaler-token-j9wgv                            kubernetes.io/service-account-token   3      18m\nkube-system                          job-controller-token-2sq42                                       kubernetes.io/service-account-token   3      18m\nkube-system                          kube-dns-autoscaler-token-5ksv4                                  kubernetes.io/service-account-token   3      18m\nkube-system                          kubernetes-dashboard-certs                                       Opaque                                0      18m\nkube-system                          kubernetes-dashboard-key-holder                                  Opaque                                2      18m\nkube-system                          kubernetes-dashboard-token-q8fqp                                 kubernetes.io/service-account-token   3      18m\nkube-system                          metadata-proxy-token-ts8rf                                       kubernetes.io/service-account-token   3      18m\nkube-system                          metrics-server-token-lhsv7                                       kubernetes.io/service-account-token   3      18m\nkube-system                          namespace-controller-token-d6fnc                                 kubernetes.io/service-account-token   3      18m\nkube-system                          node-controller-token-6b56m                                      kubernetes.io/service-account-token   3      18m\nkube-system                          persistent-volume-binder-token-26qxf                             kubernetes.io/service-account-token   3      18m\nkube-system                          pod-garbage-collector-token-g6p4p                                kubernetes.io/service-account-token   3      18m\nkube-system                          pv-protection-controller-token-zkklj                             kubernetes.io/service-account-token   3      18m\nkube-system                          pvc-protection-controller-token-vkdnc                            kubernetes.io/service-account-token   3      18m\nkube-system                          replicaset-controller-token-5s6px                                kubernetes.io/service-account-token   3      18m\nkube-system                          replication-controller-token-2rnf6                               kubernetes.io/service-account-token   3      18m\nkube-system                          resourcequota-controller-token-jqt2g                             kubernetes.io/service-account-token   3      18m\nkube-system                          route-controller-token-lpmc5                                     kubernetes.io/service-account-token   3      18m\nkube-system                          service-account-controller-token-t4g67                           kubernetes.io/service-account-token   3      18m\nkube-system                          service-controller-token-srlzg                                   kubernetes.io/service-account-token   3      18m\nkube-system                          statefulset-controller-token-kld4s                               kubernetes.io/service-account-token   3      18m\nkube-system                          ttl-controller-token-tfq4w                                       kubernetes.io/service-account-token   3      18m\nkube-system                          volume-snapshot-controller-token-486hl                           kubernetes.io/service-account-token   3      18m\nkubectl-1384                         default-token-vqjfx                                              kubernetes.io/service-account-token   3      3m9s\nkubectl-1565                         default-token-dc8kk                                              kubernetes.io/service-account-token   3      2m11s\nkubectl-1642                         default-token-58qbq                                              kubernetes.io/service-account-token   3      113s\nkubectl-2684                         default-token-59qj5                                              kubernetes.io/service-account-token   3      4m49s\nkubectl-3606                         default-token-xvwfs                                              kubernetes.io/service-account-token   3      21s\nkubectl-3606                         secret1q2zjmwrmgm                                                Opaque                                1      1s\nkubectl-4683                         default-token-jq9pl                                              kubernetes.io/service-account-token   3      3m32s\nkubectl-499                          default-token-5jvcv                                              kubernetes.io/service-account-token   3      3m21s\nkubectl-621                          default-token-sbppm                                              kubernetes.io/service-account-token   3      4m52s\nkubectl-6770                         default-token-rz8rw                                              kubernetes.io/service-account-token   3      5s\nkubectl-8814                         default-token-859qx                                              kubernetes.io/service-account-token   3      11s\nkubectl-8926                         default-token-pknv9                                              kubernetes.io/service-account-token   3      5m42s\nkubectl-9698                         default-token-ddkq7                                              kubernetes.io/service-account-token   3      112s\nkubelet-2383                         default-token-rjw8p                                              kubernetes.io/service-account-token   3      5m4s\nkubelet-test-4199                    default-token-x4hsx                                              kubernetes.io/service-account-token   3      4m6s\nlease-test-5658                      default-token-x7lg6                                              kubernetes.io/service-account-token   3      2m41s\nmetrics-grabber-5904                 default-token-wl29l                                              kubernetes.io/service-account-token   3      3m45s\nmulti-az-9180                        default-token-wgxrk                                              kubernetes.io/service-account-token   3      98s\nnettest-1801                         default-token-cv6jm                                              kubernetes.io/service-account-token   3      3m28s\nnettest-4067                         default-token-nhzdl                                              kubernetes.io/service-account-token   3      4m20s\nnettest-6829                         default-token-2gk9k                                              kubernetes.io/service-account-token   3      7m46s\nnode-lease-test-7998                 default-token-cj6hj                                              kubernetes.io/service-account-token   3      15s\nnode-lease-test-8223                 default-token-vctbn                                              kubernetes.io/service-account-token   3      87s\npersistent-local-volumes-test-1099   default-token-sr9fp                                              kubernetes.io/service-account-token   3      2m6s\npersistent-local-volumes-test-1231   default-token-qpfm9                                              kubernetes.io/service-account-token   3      2m3s\npersistent-local-volumes-test-2922   default-token-wns7f                                              kubernetes.io/service-account-token   3      4m48s\npersistent-local-volumes-test-4306   default-token-2xkzd                                              kubernetes.io/service-account-token   3      3m54s\npersistent-local-volumes-test-4644   default-token-6m6cq                                              kubernetes.io/service-account-token   3      85s\npersistent-local-volumes-test-5270   default-token-jpxmh                                              kubernetes.io/service-account-token   3      92s\npersistent-local-volumes-test-5643   default-token-pgch7                                              kubernetes.io/service-account-token   3      5m28s\npersistent-local-volumes-test-7600   default-token-8shbg                                              kubernetes.io/service-account-token   3      2m14s\npersistent-local-volumes-test-776    default-token-76djq                                              kubernetes.io/service-account-token   3      118s\npod-network-test-2281                default-token-6jzfm                                              kubernetes.io/service-account-token   3      4m56s\npods-4409                            default-token-lpqk4                                              kubernetes.io/service-account-token   3      2m25s\nport-forwarding-4477                 default-token-tfn8s                                              kubernetes.io/service-account-token   3      7s\nport-forwarding-5553                 default-token-h62fv                                              kubernetes.io/service-account-token   3      3m7s\nprojected-1311                       default-token-ft6bx                                              kubernetes.io/service-account-token   3      4m6s\nprojected-3074                       default-token-8gvvr                                              kubernetes.io/service-account-token   3      118s\nprojected-4399                       default-token-jns5q                                              kubernetes.io/service-account-token   3      3m52s\nprojected-7590                       default-token-29gnd                                              kubernetes.io/service-account-token   3      4m11s\nprojected-7978                       default-token-pts88                                              kubernetes.io/service-account-token   3      90s\nprojected-8451                       default-token-78pkd                                              kubernetes.io/service-account-token   3      2m33s\nprojected-8451                       projected-secret-test-map-f17bea75-34b8-4dc6-b485-8a224d042d6c   Opaque                                3      2m31s\nprojected-8646                       default-token-sklhq                                              kubernetes.io/service-account-token   3      5m16s\nprojected-9614                       default-token-wr9wz                                              kubernetes.io/service-account-token   3      3m9s\nprovisioning-1097                    default-token-fhpxc                                              kubernetes.io/service-account-token   3      5m20s\nprovisioning-114                     default-token-kv9mk                                              kubernetes.io/service-account-token   3      3m37s\nprovisioning-1699                    default-token-pbbb6                                              kubernetes.io/service-account-token   3      2m47s\nprovisioning-1739                    default-token-wd8jj                                              kubernetes.io/service-account-token   3      111s\nprovisioning-228                     default-token-27wpq                                              kubernetes.io/service-account-token   3      3m26s\nprovisioning-2441                    default-token-vth2g                                              kubernetes.io/service-account-token   3      91s\nprovisioning-2764                    default-token-r7vbd                                              kubernetes.io/service-account-token   3      4m19s\nprovisioning-3047                    default-token-m9q5t                                              kubernetes.io/service-account-token   3      2m4s\nprovisioning-3210                    default-token-6wszm                                              kubernetes.io/service-account-token   3      55s\nprovisioning-3221                    default-token-j2wz6                                              kubernetes.io/service-account-token   3      99s\nprovisioning-3944                    default-token-6r84q                                              kubernetes.io/service-account-token   3      2m57s\nprovisioning-4054                    default-token-5mnnk                                              kubernetes.io/service-account-token   3      4m57s\nprovisioning-4438                    default-token-v2t79                                              kubernetes.io/service-account-token   3      65s\nprovisioning-5035                    default-token-8qzds                                              kubernetes.io/service-account-token   3      4m32s\nprovisioning-5123                    default-token-z268f                                              kubernetes.io/service-account-token   3      109s\nprovisioning-5341                    default-token-zq4f6                                              kubernetes.io/service-account-token   3      2m27s\nprovisioning-5471                    default-token-9p2ff                                              kubernetes.io/service-account-token   3      19s\nprovisioning-5540                    default-token-kq8qp                                              kubernetes.io/service-account-token   3      4m57s\nprovisioning-56                      default-token-8t284                                              kubernetes.io/service-account-token   3      27s\nprovisioning-5674                    default-token-jcbv2                                              kubernetes.io/service-account-token   3      6m8s\nprovisioning-5995                    default-token-lqd8n                                              kubernetes.io/service-account-token   3      5m25s\nprovisioning-6146                    default-token-vbwqc                                              kubernetes.io/service-account-token   3      5m39s\nprovisioning-684                     default-token-bfgxz                                              kubernetes.io/service-account-token   3      103s\nprovisioning-7323                    default-token-t6vsl                                              kubernetes.io/service-account-token   3      4s\nprovisioning-7575                    default-token-7nq7q                                              kubernetes.io/service-account-token   3      2m\nprovisioning-8552                    default-token-92j98                                              kubernetes.io/service-account-token   3      21s\nprovisioning-8556                    default-token-h7gxh                                              kubernetes.io/service-account-token   3      110s\nprovisioning-8592                    default-token-jmhw7                                              kubernetes.io/service-account-token   3      91s\nprovisioning-8761                    default-token-c4m5j                                              kubernetes.io/service-account-token   3      57s\nprovisioning-877                     default-token-srk5q                                              kubernetes.io/service-account-token   3      23s\nprovisioning-8885                    default-token-89gj9                                              kubernetes.io/service-account-token   3      2s\nprovisioning-8923                    default-token-jgg2p                                              kubernetes.io/service-account-token   3      3m46s\nprovisioning-8957                    default-token-dk4sk                                              kubernetes.io/service-account-token   3      2m35s\nprovisioning-9002                    default-token-mnlrj                                              kubernetes.io/service-account-token   3      2m48s\nprovisioning-9037                    default-token-vg7pj                                              kubernetes.io/service-account-token   3      5m44s\nprovisioning-9293                    default-token-psmqm                                              kubernetes.io/service-account-token   3      63s\nproxy-3665                           default-token-qc6mm                                              kubernetes.io/service-account-token   3      4m59s\npv-1828                              default-token-p9c46                                              kubernetes.io/service-account-token   3      3m12s\npv-2361                              default-token-pzm64                                              kubernetes.io/service-account-token   3      4m24s\npv-3632                              default-token-vfvzw                                              kubernetes.io/service-account-token   3      5m12s\npv-protection-3203                   default-token-6vftq                                              kubernetes.io/service-account-token   3      3m40s\npvc-protection-7469                  default-token-mrjw4                                              kubernetes.io/service-account-token   3      3m12s\nreplication-controller-8648          default-token-fsl74                                              kubernetes.io/service-account-token   3      3m10s\nresourcequota-2509                   default-token-8ld8v                                              kubernetes.io/service-account-token   3      75s\nresourcequota-6329                   default-token-j6d67                                              kubernetes.io/service-account-token   3      4m25s\nresourcequota-7914                   default-token-z54jj                                              kubernetes.io/service-account-token   3      3m29s\nresourcequota-8920                   default-token-tddfl                                              kubernetes.io/service-account-token   3      3m8s\nruntimeclass-8278                    default-token-swlgw                                              kubernetes.io/service-account-token   3      3m16s\nsecrets-4455                         default-token-dxwhw                                              kubernetes.io/service-account-token   3      2m23s\nsecrets-4455                         secret-test-b4c06bc4-95a3-4916-9811-f04ca41a7387                 Opaque                                3      2m22s\nsecrets-8728                         default-token-kqpgd                                              kubernetes.io/service-account-token   3      3m31s\nsecrets-8728                         secret-test-718d0e96-9595-49e7-bcbe-3c7ee5ce0c18                 Opaque                                3      3m30s\nsecrets-9272                         default-token-s68js                                              kubernetes.io/service-account-token   3      84s\nsecrets-9272                         secret-test-map-66786ae3-8526-4b43-a06f-62ede2783a7f             Opaque                                3      83s\nsecurity-context-test-1170           default-token-99svh                                              kubernetes.io/service-account-token   3      3m27s\nsecurity-context-test-3619           default-token-qb2ch                                              kubernetes.io/service-account-token   3      66s\nsecurity-context-test-8557           default-token-76t7x                                              kubernetes.io/service-account-token   3      4m10s\nservices-2709                        default-token-xrrpf                                              kubernetes.io/service-account-token   3      2m\nservices-316                         default-token-76cp6                                              kubernetes.io/service-account-token   3      5m20s\nservices-425                         default-token-gqs5n                                              kubernetes.io/service-account-token   3      81s\nservices-8231                        default-token-q8w4l                                              kubernetes.io/service-account-token   3      4m\nstatefulset-1883                     default-token-zppkg                                              kubernetes.io/service-account-token   3      5s\nstatefulset-3296                     default-token-dx7gq                                              kubernetes.io/service-account-token   3      6m30s\nstatefulset-3343                     default-token-jh77c                                              kubernetes.io/service-account-token   3      8m9s\nstatefulset-5026                     default-token-29cnw                                              kubernetes.io/service-account-token   3      6m34s\nsubpath-1340                         default-token-74f22                                              kubernetes.io/service-account-token   3      72s\nsubpath-1340                         my-secret                                                        Opaque                                1      72s\nsvcaccounts-7539                     default-token-7w2f4                                              kubernetes.io/service-account-token   3      103s\nsvcaccounts-7539                     mount-test-token-8p8sd                                           kubernetes.io/service-account-token   3      102s\nsvcaccounts-8513                     default-token-q4b2c                                              kubernetes.io/service-account-token   3      3m13s\nsvcaccounts-8513                     default-token-s9pq6                                              kubernetes.io/service-account-token   3      3m18s\nsvcaccounts-8786                     default-token-frm79                                              kubernetes.io/service-account-token   3      3m15s\nsvcaccounts-8786                     mount-token-2gn8f                                                kubernetes.io/service-account-token   3      3m13s\nsvcaccounts-8786                     nomount-token-zmxfs                                              kubernetes.io/service-account-token   3      3m13s\nsysctl-1230                          default-token-5wv2c                                              kubernetes.io/service-account-token   3      4m13s\nsysctl-7584                          default-token-wfjlc                                              kubernetes.io/service-account-token   3      71s\ntables-2128                          default-token-w8m7j                                              kubernetes.io/service-account-token   3      97s\ntables-2771                          default-token-xfdbv                                              kubernetes.io/service-account-token   3      25s\ntables-8261                          default-token-kgvbj                                              kubernetes.io/service-account-token   3      83s\ntopology-9038                        default-token-6j7ct                                              kubernetes.io/service-account-token   3      5m58s\nvolume-1056                          default-token-lfd9x                                              kubernetes.io/service-account-token   3      66s\nvolume-1444                          default-token-d2kxr                                              kubernetes.io/service-account-token   3      6m31s\nvolume-1513                          default-token-rx4gd                                              kubernetes.io/service-account-token   3      2m26s\nvolume-1956                          default-token-x2sfm                                              kubernetes.io/service-account-token   3      6m36s\nvolume-2019                          default-token-tkh8t                                              kubernetes.io/service-account-token   3      3m17s\nvolume-2270                          default-token-dzwjf                                              kubernetes.io/service-account-token   3      64s\nvolume-2574                          default-token-zmvmz                                              kubernetes.io/service-account-token   3      5m29s\nvolume-2639                          default-token-6npxv                                              kubernetes.io/service-account-token   3      4m14s\nvolume-2777                          default-token-v7thb                                              kubernetes.io/service-account-token   3      95s\nvolume-2840                          default-token-thwd2                                              kubernetes.io/service-account-token   3      5m14s\nvolume-3461                          default-token-c9wxr                                              kubernetes.io/service-account-token   3      5m36s\nvolume-3704                          default-token-7dl4s                                              kubernetes.io/service-account-token   3      117s\nvolume-3991                          default-token-z2r6g                                              kubernetes.io/service-account-token   3      2m30s\nvolume-3995                          default-token-cnnnf                                              kubernetes.io/service-account-token   3      2m32s\nvolume-4490                          default-token-9qwt5                                              kubernetes.io/service-account-token   3      5m25s\nvolume-5215                          default-token-fsw7k                                              kubernetes.io/service-account-token   3      116s\nvolume-5683                          default-token-b8lsh                                              kubernetes.io/service-account-token   3      2m8s\nvolume-5853                          default-token-9zxmg                                              kubernetes.io/service-account-token   3      93s\nvolume-6127                          default-token-m97xq                                              kubernetes.io/service-account-token   3      95s\nvolume-6940                          default-token-h25pz                                              kubernetes.io/service-account-token   3      104s\nvolume-7317                          default-token-ck8qc                                              kubernetes.io/service-account-token   3      103s\nvolume-746                           default-token-xj8wt                                              kubernetes.io/service-account-token   3      3m37s\nvolume-8392                          default-token-h5ss4                                              kubernetes.io/service-account-token   3      3m8s\nvolume-8498                          default-token-5cp8l                                              kubernetes.io/service-account-token   3      8s\nvolume-8620                          default-token-49ktg                                              kubernetes.io/service-account-token   3      2m59s\nvolume-9179                          default-token-bgkbv                                              kubernetes.io/service-account-token   3      57s\nvolume-9340                          default-token-mswmf                                              kubernetes.io/service-account-token   3      4m56s\nvolume-9781                          default-token-8h75n                                              kubernetes.io/service-account-token   3      3m38s\nvolume-9822                          default-token-ll9n7                                              kubernetes.io/service-account-token   3      2s\nvolume-expand-2836                   default-token-9n8rn                                              kubernetes.io/service-account-token   3      51s\nvolume-placement-5340                default-token-f26v4                                              kubernetes.io/service-account-token   3      105s\nvolume-placement-9715                default-token-cbl4h                                              kubernetes.io/service-account-token   3      9s\nvolume-provisioning-4988             default-token-ts65p                                              kubernetes.io/service-account-token   3      69s\nvolumemode-1240                      default-token-6t28v                                              kubernetes.io/service-account-token   3      5m53s\nvolumemode-2221                      default-token-9r6fg                                              kubernetes.io/service-account-token   3      2m23s\nvolumemode-2444                      default-token-ls5gx                                              kubernetes.io/service-account-token   3      116s\nvolumemode-4012                      csi-attacher-token-7gsnn                                         kubernetes.io/service-account-token   3      70s\nvolumemode-4012                      csi-provisioner-token-j5lz5                                      kubernetes.io/service-account-token   3      70s\nvolumemode-4012                      csi-resizer-token-b9j7d                                          kubernetes.io/service-account-token   3      69s\nvolumemode-4012                      csi-snapshotter-token-fjvn4                                      kubernetes.io/service-account-token   3      69s\nvolumemode-4012                      default-token-n588k                                              kubernetes.io/service-account-token   3      72s\nvolumemode-443                       default-token-zm2zt                                              kubernetes.io/service-account-token   3      4m16s\nvolumemode-451                       default-token-2c9hm                                              kubernetes.io/service-account-token   3      2m47s\nvolumemode-7712                      default-token-2rwpv                                              kubernetes.io/service-account-token   3      3m27s\nvolumemode-807                       default-token-pnx5p                                              kubernetes.io/service-account-token   3      2m18s\nwatch-3223                           default-token-wdl8d                                              kubernetes.io/service-account-token   3      2m21s\nwebhook-3219-markers                 default-token-nvk72                                              kubernetes.io/service-account-token   3      4m13s\nwebhook-3219                         default-token-zw92z                                              kubernetes.io/service-account-token   3      4m16s\nwebhook-3426-markers                 default-token-6clrv                                              kubernetes.io/service-account-token   3      14s\nwebhook-3426                         default-token-hwdhj                                              kubernetes.io/service-account-token   3      18s\nwebhook-3426                         sample-webhook-secret                                            Opaque                                2      14s\nwebhook-3683-markers                 default-token-t8vjc                                              kubernetes.io/service-account-token   3      2m1s\nwebhook-3683                         default-token-gl2nk                                              kubernetes.io/service-account-token   3      2m5s\nwebhook-6106-markers                 default-token-frt9z                                              kubernetes.io/service-account-token   3      53s\nwebhook-6106                         default-token-m8rvm                                              kubernetes.io/service-account-token   3      56s\nwebhook-808-markers                  default-token-vgdbw                                              kubernetes.io/service-account-token   3      58s\nwebhook-808                          default-token-zkj6v                                              kubernetes.io/service-account-token   3      60s\nzone-support-1256                    default-token-z7nxq                                              kubernetes.io/service-account-token   3      92s\nzone-support-2713                    default-token-vdpcl                                              kubernetes.io/service-account-token   3      2m51s\nzone-support-277                     default-token-8h798                                              kubernetes.io/service-account-token   3      65s\nzone-support-3598                    default-token-bsmg7                                              kubernetes.io/service-account-token   3      4m17s\nzone-support-3736                    default-token-rb5wl                                              kubernetes.io/service-account-token   3      48s\nzone-support-5468                    default-token-rjxw9                                              kubernetes.io/service-account-token   3      84s\n"
Jan 16 07:05:57.685: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.159.163 --kubeconfig=/workspace/.kube/config get pods --all-namespaces'
Jan 16 07:05:58.436: INFO: stderr: ""
Jan 16 07:05:58.436: INFO: stdout: "NAMESPACE                            NAME                                                             READY   STATUS                  RESTARTS   AGE\napparmor-8554                        apparmor-loader-t4qxm                                            1/1     Running                 0          96s\napparmor-8554                        test-apparmor-b4qp7                                              0/1     Completed               0          78s\ncontainer-lifecycle-hook-9891        pod-handle-http-request                                          1/1     Running                 0          48s\ncsi-mock-volumes-2747                csi-mockplugin-0                                                 3/3     Running                 0          113s\ncsi-mock-volumes-2747                csi-mockplugin-resizer-0                                         1/1     Running                 0          113s\ncsi-mock-volumes-8708                csi-mockplugin-0                                                 0/3     ContainerCreating       0          7s\ncsi-mock-volumes-8708                csi-mockplugin-attacher-0                                        1/1     Running                 0          7s\ncsi-mock-volumes-8708                csi-mockplugin-resizer-0                                         0/1     ContainerCreating       0          6s\ndeployment-1104                      test-rolling-update-deployment-67cf4f6444-6jzjg                  1/1     Running                 0          5m7s\ndeployment-2085                      test-cleanup-deployment-55ffc6b7b6-9t6sx                         1/1     Running                 0          2m53s\ndeployment-3447                      test-rollover-deployment-574d6dfbff-bn2zt                        1/1     Running                 0          58s\ndeployment-7737                      webserver-6f4df6d875-kpr6t                                       1/1     Running                 0          12s\ndeployment-7737                      webserver-b44845bb-b969m                                         0/1     ContainerCreating       0          5s\ndeployment-7737                      webserver-b44845bb-frljb                                         1/1     Running                 0          27s\ndeployment-7737                      webserver-b44845bb-x5ngz                                         0/1     ContainerCreating       0          6s\ndisruption-4879                      pod-1                                                            1/1     Running                 0          2m50s\ndisruption-4879                      pod-2                                                            1/1     Running                 0          2m50s\ndisruption-5978                      pod-0                                                            1/1     Running                 0          19s\ndisruption-5978                      pod-1                                                            1/1     Running                 0          19s\ne2e-kubelet-etc-hosts-768            test-pod                                                         0/3     ContainerCreating       0          3s\ne2e-privileged-pod-8651              privileged-pod                                                   2/2     Running                 0          2m40s\ngc-1557                              simpletest.deployment-fb5f5c75d-dp2h2                            1/1     Running                 0          2m37s\ngc-1557                              simpletest.deployment-fb5f5c75d-wr4dn                            1/1     Running                 0          2m37s\ngc-2159                              simpletest-rc-to-be-deleted-b4kf2                                1/1     Running                 0          49s\ngc-2159                              simpletest-rc-to-be-deleted-fhc4f                                1/1     Running                 0          49s\ngc-2159                              simpletest-rc-to-be-deleted-fkf6n                                1/1     Running                 0          49s\ngc-2159                              simpletest-rc-to-be-deleted-fvsd9                                1/1     Running                 0          49s\ngc-2159                              simpletest-rc-to-be-deleted-gtg7x                                1/1     Running                 0          50s\ninit-container-7210                  pod-init-ced300d7-ca51-4c93-915d-ed22a61ca51c                    1/1     Running                 0          3m29s\nkube-system                          coredns-65567c7b57-nhgsn                                         1/1     Running                 0          17m\nkube-system                          coredns-65567c7b57-vfjw5                                         1/1     Running                 0          18m\nkube-system                          etcd-empty-dir-cleanup-bootstrap-e2e-master                      1/1     Running                 0          17m\nkube-system                          etcd-server-bootstrap-e2e-master                                 1/1     Running                 0          18m\nkube-system                          etcd-server-events-bootstrap-e2e-master                          1/1     Running                 0          18m\nkube-system                          event-exporter-v0.3.1-747b47fcd-ml7vh                            2/2     Running                 0          18m\nkube-system                          fluentd-gcp-scaler-76d9c77b4d-zpv4t                              1/1     Running                 0          18m\nkube-system                          fluentd-gcp-v3.2.0-4qwt9                                         2/2     Running                 0          17m\nkube-system                          fluentd-gcp-v3.2.0-4stqh                                         2/2     Running                 0          17m\nkube-system                          fluentd-gcp-v3.2.0-chktk                                         2/2     Running                 0          17m\nkube-system                          fluentd-gcp-v3.2.0-jbglh                                         2/2     Running                 0          17m\nkube-system                          fluentd-gcp-v3.2.0-vnzbs                                         2/2     Running                 0          17m\nkube-system                          kube-addon-manager-bootstrap-e2e-master                          1/1     Running                 0          17m\nkube-system                          kube-apiserver-bootstrap-e2e-master                              1/1     Running                 1          18m\nkube-system                          kube-controller-manager-bootstrap-e2e-master                     1/1     Running                 0          18m\nkube-system                          kube-dns-autoscaler-65bc6d4889-mzf7g                             1/1     Running                 0          10m\nkube-system                          kube-proxy-bootstrap-e2e-minion-group-5wcz                       1/1     Running                 0          18m\nkube-system                          kube-proxy-bootstrap-e2e-minion-group-9dh8                       1/1     Running                 0          18m\nkube-system                          kube-proxy-bootstrap-e2e-minion-group-mnwl                       1/1     Running                 0          18m\nkube-system                          kube-proxy-bootstrap-e2e-minion-group-n0jl                       1/1     Running                 0          18m\nkube-system                          kube-scheduler-bootstrap-e2e-master                              1/1     Running                 0          18m\nkube-system                          kubernetes-dashboard-7778f8b456-dr9n4                            1/1     Running                 0          18m\nkube-system                          l7-default-backend-678889f899-mzk9g                              1/1     Running                 0          18m\nkube-system                          l7-lb-controller-bootstrap-e2e-master                            1/1     Running                 3          18m\nkube-system                          metadata-proxy-v0.1-8q8nt                                        2/2     Running                 0          18m\nkube-system                          metadata-proxy-v0.1-d56tj                                        2/2     Running                 0          18m\nkube-system                          metadata-proxy-v0.1-l84kl                                        2/2     Running                 0          18m\nkube-system                          metadata-proxy-v0.1-pnxbm                                        2/2     Running                 0          18m\nkube-system                          metadata-proxy-v0.1-xvc29                                        2/2     Running                 0          18m\nkube-system                          metrics-server-v0.3.6-5f859c87d6-tqlh6                           2/2     Running                 0          17m\nkube-system                          volume-snapshot-controller-0                                     1/1     Running                 0          18m\nkubectl-1384                         agnhost-master-l2hjd                                             1/1     Running                 0          3m8s\nkubectl-3606                         pod1q2zjmwrmgm                                                   0/1     Pending                 0          1s\nkubectl-4683                         agnhost-master-bzp59                                             1/1     Running                 0          3m29s\nkubectl-4683                         agnhost-master-qzgtx                                             1/1     Running                 0          3m30s\nkubectl-6770                         e2e-test-httpd-rc-5chmd                                          0/1     ContainerCreating       0          5s\nkubectl-8814                         httpd                                                            0/1     Running                 0          9s\nkubectl-8926                         update-demo-kitten-bjfbf                                         1/1     Running                 0          5m16s\nkubectl-8926                         update-demo-kitten-bksjj                                         1/1     Running                 0          5m1s\nkubelet-test-4199                    busybox-readonly-fs5d82fcc0-5576-4988-bd25-755110bbfde8          1/1     Running                 0          4m7s\nnettest-1801                         netserver-0                                       
               1/1     Running                 0          3m29s\nnettest-1801                         netserver-1                                                      1/1     Running                 0          3m29s\nnettest-1801                         netserver-2                                                      1/1     Running                 0          3m29s\nnettest-1801                         netserver-3                                                      1/1     Running                 0          3m29s\nnettest-1801                         test-container-pod                                               1/1     Running                 0          2m58s\nnettest-4067                         host-test-container-pod                                          1/1     Running                 0          3m48s\nnettest-4067                         netserver-0                                                      1/1     Running                 0          4m20s\nnettest-4067                         netserver-1                                                      1/1     Running                 0          4m20s\nnettest-4067                         netserver-2                                                      1/1     Running                 0          4m19s\nnettest-4067                         netserver-3                                                      1/1     Running                 0          4m19s\nnettest-4067                         test-container-pod                                               1/1     Running                 0          3m48s\nnettest-6829                         netserver-1                                                      1/1     Running                 0          7m48s\nnettest-6829                         netserver-2                                                      1/1     Running                 0          7m47s\nnettest-6829                         netserver-3                                                      1/1     Running                 0          7m47s\nnettest-6829                         test-container-pod                                               1/1     Running                 0          7m14s\npersistent-local-volumes-test-1099   hostexec-bootstrap-e2e-minion-group-5wcz-zmpxq                   1/1     Running                 0          2m4s\npersistent-local-volumes-test-1231   hostexec-bootstrap-e2e-minion-group-5wcz-8czgs                   1/1     Running                 0          2m2s\npersistent-local-volumes-test-2922   hostexec-bootstrap-e2e-minion-group-5wcz-smxv2                   1/1     Running                 0          4m48s\npersistent-local-volumes-test-4306   hostexec-bootstrap-e2e-minion-group-5wcz-hdfdg                   1/1     Running                 0          3m53s\npersistent-local-volumes-test-4644   hostexec-bootstrap-e2e-minion-group-5wcz-fxzlr                   1/1     Running                 0          86s\npersistent-local-volumes-test-5270   hostexec-bootstrap-e2e-minion-group-5wcz-l8bx7                   1/1     Running                 0          92s\npersistent-local-volumes-test-5643   hostexec-bootstrap-e2e-minion-group-5wcz-ndgl6                   1/1     Running                 0          5m28s\npersistent-local-volumes-test-7600   hostexec-bootstrap-e2e-minion-group-5wcz-4xxc4                   1/1     Running                 0          2m13s\npersistent-local-volumes-test-776    hostexec-bootstrap-e2e-minion-group-5wcz-lmzlw                   1/1     Running                 0          
118s\npod-network-test-2281                netserver-0                                                      1/1     Running                 0          4m56s\npod-network-test-2281                netserver-1                                                      1/1     Running                 0          4m55s\npod-network-test-2281                netserver-2                                                      1/1     Running                 0          4m55s\npod-network-test-2281                netserver-3                                                      1/1     Running                 0          4m55s\npod-network-test-2281                test-container-pod                                               1/1     Running                 0          4m6s\npods-4409                            server-envvars-d26dce68-0196-46e2-8967-d10506800372              1/1     Running                 0          2m25s\nport-forwarding-4477                 pfpod                                                            0/2     ContainerCreating       0          7s\nport-forwarding-5553                 pfpod                                                            0/2     Completed               0          3m6s\nprojected-3074                       annotationupdate95d66fe2-dba4-4448-8c3f-19d0ca650f41             1/1     Running                 0          118s\nprovisioning-4438                    csi-hostpath-attacher-0                                          1/1     Running                 0          56s\nprovisioning-4438                    csi-hostpath-provisioner-0                                       1/1     Running                 0          57s\nprovisioning-4438                    csi-hostpath-resizer-0                                           1/1     Running                 0          57s\nprovisioning-4438                    csi-hostpathplugin-0                                             3/3     Running                 0          58s\nprovisioning-4438                    csi-snapshotter-0                                                1/1     Running                 0          57s\nprovisioning-5471                    hostexec-bootstrap-e2e-minion-group-n0jl-b5pfc                   1/1     Running                 0          15s\nprovisioning-56                      hostexec-bootstrap-e2e-minion-group-5wcz-7gvz6                   1/1     Running                 0          20s\nprovisioning-8552                    hostexec-bootstrap-e2e-minion-group-n0jl-9phhz                   1/1     Running                 0          15s\nprovisioning-877                     hostexec-bootstrap-e2e-minion-group-9dh8-pkq5s                   1/1     Running                 0          17s\nreplication-controller-8648          my-hostname-private-785c01f1-1d40-45b8-a690-02dd12678a09-znsfz   1/1     Running                 0          3m11s\nsecurity-context-test-1170           busybox-readonly-true-6490a8f5-67cd-4bbc-b041-1462715d7494       0/1     Error                   0          3m28s\nsecurity-context-test-3619           busybox-privileged-false-1b8e5ee8-437e-45b5-9d4c-803929b83025    0/1     Completed               0          66s\nsecurity-context-test-8557           busybox-privileged-true-8a4b7f9a-a58f-423e-baf8-b84dfb618131     0/1     Completed               0          4m11s\nservices-2709                        execpodd55r8                                                     1/1     Running                 0          110s\nservices-2709                        externalname-service-5llhs                 
                      1/1     Running                 0          118s\nservices-2709                        externalname-service-zshtg                                       1/1     Running                 0          118s\nservices-316                         execpodkcwp8                                                     1/1     Running                 0          5m6s\nservices-316                         nodeport-update-service-dwtgl                                    1/1     Running                 0          5m17s\nservices-316                         nodeport-update-service-nvrbg                                    1/1     Running                 0          5m18s\nservices-425                         execpod-noendpointsn6vbj                                         1/1     Running                 0          81s\nservices-8231                        execpodzd562                                                     1/1     Running                 0          3m36s\nstatefulset-1883                     ss2-0                                                            0/1     ContainerCreating       0          4s\nsvcaccounts-7539                     pod-service-account-93dedb9a-2e60-4eaa-ac03-e1607d779af1         1/1     Running                 0          103s\nsvcaccounts-8786                     pod-service-account-defaultsa                                    0/1     Completed               0          3m14s\nsvcaccounts-8786                     pod-service-account-defaultsa-mountspec                          0/1     Completed               0          3m13s\nsvcaccounts-8786                     pod-service-account-defaultsa-nomountspec                        0/1     Completed               0          3m12s\nsvcaccounts-8786                     pod-service-account-mountsa                                      0/1     Completed               0          3m13s\nsvcaccounts-8786                     pod-service-account-mountsa-mountspec                            0/1     Completed               0          3m13s\nsvcaccounts-8786                     pod-service-account-mountsa-nomountspec                          0/1     Completed               0          3m11s\nsvcaccounts-8786                     pod-service-account-nomountsa                                    0/1     Completed               0          3m13s\nsvcaccounts-8786                     pod-service-account-nomountsa-mountspec                          0/1     Completed               0          3m12s\nsvcaccounts-8786                     pod-service-account-nomountsa-nomountspec                        0/1     Completed               0          3m11s\nsysctl-1230                          sysctl-10e71d51-578c-40d0-9647-a9b63d865cf4                      0/1     Completed               0          4m14s\ntables-8261                          pod-1                                                            1/1     Running                 0          84s\nvolume-1056                          external-provisioner-882d6                                       1/1     Running                 0          65s\nvolume-1056                          nfs-injector                                                     1/1     Running                 0          16s\nvolume-1056                          nfs-server                                                       1/1     Running                 0          48s\nvolume-2270                          gcepd-client                                                     0/1     Pending                 0          
1s\nvolume-2777                          gcepd-client                                                     0/1     ContainerCreating       0          18s\nvolume-5853                          gcepd-client                                                     0/1     ContainerCreating       0          5s\nvolume-expand-2836                   security-context-6e6c869b-d4d9-4aed-b222-b8e181d64a52            1/1     Running                 0          34s\nvolumemode-4012                      csi-hostpath-attacher-0                                          1/1     Running                 0          62s\nvolumemode-4012                      csi-hostpath-provisioner-0                                       1/1     Running                 0          63s\nvolumemode-4012                      csi-hostpath-resizer-0                                           1/1     Running                 0          65s\nvolumemode-4012                      csi-hostpathplugin-0                                             3/3     Running                 0          68s\nvolumemode-4012                      csi-snapshotter-0                                                1/1     Running                 0          65s\nwebhook-3426                         sample-webhook-deployment-5f65f8c764-57ttx                       1/1     Running                 0          16s\nwebhook-6106                         webhook-to-be-mutated                                            0/1     Init:ImagePullBackOff   0          29s\n"
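The pod dump above mixes healthy pods with a handful of Pending, Error, and Init:ImagePullBackOff entries. A minimal sketch for isolating the unhealthy ones, reusing the harness's own --server and --kubeconfig flags from the surrounding log (note that Completed pods also surface here, since their phase is Succeeded):

  # Server-side filter: list only pods whose phase is not Running.
  kubectl --server=https://34.83.159.163 --kubeconfig=/workspace/.kube/config \
    get pods --all-namespaces --field-selector=status.phase!=Running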
Jan 16 07:05:59.887: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.159.163 --kubeconfig=/workspace/.kube/config get replicationcontrollers --all-namespaces'
Jan 16 07:06:00.982: INFO: stderr: ""
Jan 16 07:06:00.982: INFO: stdout:
NAMESPACE  NAME  DESIRED  CURRENT  READY  AGE
apparmor-8554  apparmor-loader  1  1  1  99s
gc-2159  simpletest-rc-to-stay  0  0  0  52s
kubectl-1384  agnhost-master  1  1  1  3m11s
kubectl-3606  rc1q2zjmwrmgm  1  0  0  1s
kubectl-4683  agnhost-master  1  1  1  3m32s
kubectl-6770  e2e-test-httpd-rc  1  1  1  8s
kubectl-6770  e2e-test-httpd-rc-b09c6c62784b14c8174e9d912ef2c451  1  1  0  3s
kubectl-8926  update-demo-nautilus  2  2  2  4m18s
replication-controller-8648  my-hostname-private-785c01f1-1d40-45b8-a690-02dd12678a09  1  1  1  3m13s
services-2709  externalname-service  2  2  2  2m1s
services-316  nodeport-update-service  2  2  2  5m21s
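In the listing above, rc1q2zjmwrmgm shows DESIRED 1 but CURRENT 0 at one second of age, which is normal for a just-created controller. A sketch for digging into a controller that stays behind its desired count (namespace and name taken from the table above):

  # 'describe' merges the controller's status with its recent events,
  # which is usually enough to see why pods are not being created.
  kubectl --kubeconfig=/workspace/.kube/config -n kubectl-3606 \
    describe replicationcontroller rc1q2zjmwrmgm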
Jan 16 07:06:01.869: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.159.163 --kubeconfig=/workspace/.kube/config get endpoints --all-namespaces'
Jan 16 07:06:02.644: INFO: stderr: ""
Jan 16 07:06:02.644: INFO: stdout:
NAMESPACE  NAME  ENDPOINTS  AGE
default  kubernetes  34.83.159.163:443  19m
kube-system  default-http-backend  10.64.1.3:8080  18m
kube-system  kube-controller-manager  <none>  19m
kube-system  kube-dns  10.64.1.5:53,10.64.3.2:53,10.64.1.5:53 + 3 more...  18m
kube-system  kube-scheduler  <none>  19m
kube-system  kubernetes-dashboard  10.64.2.2:8443  18m
kube-system  metrics-server  10.64.2.4:443  18m
kubectl-1384  agnhost-master  10.64.2.225:6379  3m11s
kubectl-3606  ep1nameq2zjmwrmgm  192.168.3.1:8000  1s
nettest-1801  node-port-service  10.64.0.213:8081,10.64.1.197:8081,10.64.2.216:8081 + 5 more...  2m53s
nettest-1801  session-affinity-service  10.64.0.213:8081,10.64.1.197:8081,10.64.2.216:8081 + 5 more...  2m52s
nettest-4067  node-port-service  10.64.0.203:8081,10.64.1.188:8081,10.64.2.204:8081 + 5 more...  3m43s
nettest-4067  session-affinity-service  10.64.0.203:8081,10.64.1.188:8081,10.64.2.204:8081 + 5 more...  3m41s
nettest-6829  node-port-service  10.64.0.152:8081,10.64.2.152:8081,10.64.3.139:8081 + 3 more...  7m7s
nettest-6829  session-affinity-service  10.64.0.152:8081,10.64.2.152:8081,10.64.3.139:8081 + 3 more...  7m6s
pods-4409  fooservice  10.64.0.238:8080  2m16s
provisioning-114  example.com-nfs-provisioning-114  <none>  3m30s
provisioning-4438  csi-hostpath-provisioner  10.64.1.233:12345  62s
provisioning-4438  csi-hostpath-resizer  10.64.1.230:12345  62s
provisioning-4438  csi-snapshotter  10.64.1.231:12345  61s
provisioning-8556  example.com-nfs-provisioning-8556  <none>  105s
services-425  no-pods  <none>  85s
statefulset-1883  test  10.64.3.9:80  9s
statefulset-3296  test  <none>  6m35s
statefulset-3343  test  <none>  8m14s
statefulset-5026  test  <none>  6m38s
volume-1056  example.com-nfs-volume-1056  <none>  59s
volume-1444  example.com-nfs-volume-1444  <none>  6m25s
volume-1513  example.com-nfs-volume-1513  <none>  2m17s
volume-2019  example.com-nfs-volume-2019  <none>  3m10s
volume-3991  example.com-nfs-volume-3991  <none>  2m22s
volume-6940  example.com-nfs-volume-6940  <none>  98s
volumemode-4012  csi-hostpath-attacher  10.64.3.242:12345  74s
volumemode-4012  csi-hostpath-provisioner  10.64.3.244:12345  72s
volumemode-4012  csi-hostpath-resizer  10.64.3.240:12345  71s
volumemode-4012  csi-hostpathplugin  10.64.3.238:12345  72s
volumemode-4012  csi-snapshotter  10.64.3.241:12345  70s
webhook-3426  e2e-test-webhook  10.64.3.6:8444  6s
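Many endpoints above read <none>; for the statefulset-* and example.com-nfs-* entries that is expected while no ready pods match the service selector. A sketch for telling a selector mismatch apart from pods that simply are not ready yet (service name from the services-425 row above):

  # Print the service's selector, then list candidate pods with readiness;
  # ready pods plus an empty ENDPOINTS column points at a selector mismatch.
  kubectl --kubeconfig=/workspace/.kube/config -n services-425 \
    get service no-pods -o jsonpath='{.spec.selector}'
  kubectl --kubeconfig=/workspace/.kube/config -n services-425 get pods -o wide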
... skipping 20 lines ...
Jan 16 07:06:15.746: INFO: stdout:
NAMESPACE  NAME  READY  UP-TO-DATE  AVAILABLE  AGE
deployment-2085  test-cleanup-deployment  1/1  1  1  3m11s
deployment-3447  test-rollover-deployment  1/1  1  1  80s
deployment-7737  webserver  3/3  3  3  97s
kube-system  coredns  2/2  2  2  18m
kube-system  event-exporter-v0.3.1  1/1  1  1  18m
kube-system  fluentd-gcp-scaler  1/1  1  1  18m
kube-system  kube-dns-autoscaler  1/1  1  1  18m
kube-system  kubernetes-dashboard  1/1  1  1  18m
kube-system  l7-default-backend  1/1  1  1  18m
kube-system  metrics-server-v0.3.6  1/1  1  1  18m
kubectl-3606  deployment4q2zjmwrmgm  0/1  0  0  1s
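deployment4q2zjmwrmgm reports 0/1 ready at one second of age. Rather than polling get in a loop, a sketch that blocks until the rollout converges or times out:

  # Wait for the deployment's ReplicaSet to become fully available;
  # --timeout turns an indefinite hang into a clean failure.
  kubectl --kubeconfig=/workspace/.kube/config -n kubectl-3606 \
    rollout status deployment/deployment4q2zjmwrmgm --timeout=2m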
Jan 16 07:06:16.807: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.159.163 --kubeconfig=/workspace/.kube/config get replicasets --all-namespaces'
Jan 16 07:06:17.440: INFO: stderr: ""
Jan 16 07:06:17.440: INFO: stdout:
NAMESPACE  NAME  DESIRED  CURRENT  READY  AGE
deployment-2085  test-cleanup-deployment-55ffc6b7b6  1  1  1  3m12s
deployment-3447  test-rollover-controller  0  0  0  90s
deployment-3447  test-rollover-deployment-574d6dfbff  1  1  1  78s
deployment-3447  test-rollover-deployment-f6c94f66c  0  0  0  81s
deployment-7737  webserver-6f4df6d875  0  0  0  80s
deployment-7737  webserver-79fbcb94c6  0  0  0  89s
deployment-7737  webserver-b44845bb  3  3  3  49s
gc-1557  simpletest.deployment-fb5f5c75d  2  2  2  2m57s
kube-system  coredns-65567c7b57  2  2  2  18m
kube-system  event-exporter-v0.3.1-747b47fcd  1  1  1  18m
kube-system  fluentd-gcp-scaler-76d9c77b4d  1  1  1  18m
kube-system  kube-dns-autoscaler-65bc6d4889  1  1  1  18m
kube-system  kubernetes-dashboard-7778f8b456  1  1  1  18m
kube-system  l7-default-backend-678889f899  1  1  1  18m
kube-system  metrics-server-v0.3.6-5f859c87d6  1  1  1  18m
kube-system  metrics-server-v0.3.6-65d4dc878  0  0  0  18m
kubectl-3606  rs3q2zjmwrmgm  1  0  0  1s
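The three webserver-* ReplicaSets above illustrate a completed rollover: two old sets scaled to 0 and the active set at 3/3. A sketch for mapping a ReplicaSet back to the Deployment that owns it:

  # Print the owning controller recorded in ownerReferences;
  # for a Deployment-managed ReplicaSet this is "Deployment/<name>".
  kubectl --kubeconfig=/workspace/.kube/config -n deployment-7737 \
    get replicaset webserver-b44845bb \
    -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}'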
Jan 16 07:06:18.803: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.159.163 --kubeconfig=/workspace/.kube/config get events --all-namespaces'
Jan 16 07:06:25.550: INFO: stderr: ""
Jan 16 07:06:25.550: INFO: stdout:
NAMESPACE  LAST SEEN  TYPE  REASON  OBJECT  MESSAGE
apparmor-8554  117s  Normal  Scheduled  pod/apparmor-loader-t4qxm  Successfully assigned apparmor-8554/apparmor-loader-t4qxm to bootstrap-e2e-minion-group-n0jl
apparmor-8554  113s  Normal  Pulling  pod/apparmor-loader-t4qxm  Pulling image "gcr.io/kubernetes-e2e-test-images/apparmor-loader:1.0"
apparmor-8554  105s  Normal  Pulled  pod/apparmor-loader-t4qxm  Successfully pulled image "gcr.io/kubernetes-e2e-test-images/apparmor-loader:1.0"
apparmor-8554  104s  Normal  Created  pod/apparmor-loader-t4qxm  Created container apparmor-loader
apparmor-8554  103s  Normal  Started  pod/apparmor-loader-t4qxm  Started container apparmor-loader
apparmor-8554  118s  Normal  SuccessfulCreate  replicationcontroller/apparmor-loader  Created pod: apparmor-loader-t4qxm
apparmor-8554  99s  Normal  Scheduled  pod/test-apparmor-b4qp7  Successfully assigned apparmor-8554/test-apparmor-b4qp7 to bootstrap-e2e-minion-group-n0jl
apparmor-8554  95s  Normal  Pulled  pod/test-apparmor-b4qp7  Container image "docker.io/library/busybox:1.29" already present on machine
apparmor-8554  95s  Normal  Created  pod/test-apparmor-b4qp7  Created container test
apparmor-8554  93s  Normal  Started  pod/test-apparmor-b4qp7  Started container test
clientset-1128  43s  Normal  Scheduled  pod/podf45b47e6-b019-4d0f-b47b-8dd209ff5ac3  Successfully assigned clientset-1128/podf45b47e6-b019-4d0f-b47b-8dd209ff5ac3 to bootstrap-e2e-minion-group-9dh8
clientset-1128  42s  Normal  Pulled  pod/podf45b47e6-b019-4d0f-b47b-8dd209ff5ac3  Container image "docker.io/library/nginx:1.14-alpine" already present on machine
clientset-1128  42s  Normal  Created  pod/podf45b47e6-b019-4d0f-b47b-8dd209ff5ac3  Created container nginx
clientset-1128  39s  Normal  Started  pod/podf45b47e6-b019-4d0f-b47b-8dd209ff5ac3  Started container nginx
configmap-5408  58s  Normal  Scheduled  pod/pod-configmaps-e6bc2983-8be2-4435-a9d2-38457e9250fd  Successfully assigned configmap-5408/pod-configmaps-e6bc2983-8be2-4435-a9d2-38457e9250fd to bootstrap-e2e-minion-group-n0jl
configmap-5408  54s  Normal  Pulled  pod/pod-configmaps-e6bc2983-8be2-4435-a9d2-38457e9250fd  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
configmap-5408  54s  Normal  Created  pod/pod-configmaps-e6bc2983-8be2-4435-a9d2-38457e9250fd  Created container configmap-volume-test
configmap-5408  53s  Normal  Started  pod/pod-configmaps-e6bc2983-8be2-4435-a9d2-38457e9250fd  Started container configmap-volume-test
configmap-5446  2m39s  Normal  Scheduled  pod/pod-configmaps-4bbb0f67-6d94-41fe-8718-3d53dc092652  Successfully assigned configmap-5446/pod-configmaps-4bbb0f67-6d94-41fe-8718-3d53dc092652 to bootstrap-e2e-minion-group-n0jl
configmap-5446  2m37s  Warning  FailedMount  pod/pod-configmaps-4bbb0f67-6d94-41fe-8718-3d53dc092652  MountVolume.SetUp failed for volume "default-token-x7f7s" : failed to sync secret cache: timed out waiting for the condition
configmap-5446  2m37s  Warning  FailedMount  pod/pod-configmaps-4bbb0f67-6d94-41fe-8718-3d53dc092652  MountVolume.SetUp failed for volume "configmap-volume" : failed to sync configmap cache: timed out waiting for the condition
configmap-5446  2m35s  Normal  Pulled  pod/pod-configmaps-4bbb0f67-6d94-41fe-8718-3d53dc092652  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
configmap-5446  2m35s  Normal  Created  pod/pod-configmaps-4bbb0f67-6d94-41fe-8718-3d53dc092652  Created container configmap-volume-test
configmap-5446  2m35s  Normal  Started  pod/pod-configmaps-4bbb0f67-6d94-41fe-8718-3d53dc092652  Started container configmap-volume-test
configmap-7699  115s  Normal  Scheduled  pod/pod-configmaps-849c1cbb-25f8-4a4c-bc06-254533510d00  Successfully assigned configmap-7699/pod-configmaps-849c1cbb-25f8-4a4c-bc06-254533510d00 to bootstrap-e2e-minion-group-n0jl
configmap-7699  111s  Normal  Pulled  pod/pod-configmaps-849c1cbb-25f8-4a4c-bc06-254533510d00  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
configmap-7699  110s  Normal  Created  pod/pod-configmaps-849c1cbb-25f8-4a4c-bc06-254533510d00  Created container configmap-volume-test
configmap-7699  109s  Normal  Started  pod/pod-configmaps-849c1cbb-25f8-4a4c-bc06-254533510d00  Started container configmap-volume-test
configmap-8308  2m9s  Normal  Scheduled  pod/pod-configmaps-dc4a10b4-d045-4233-8dbb-adbdd13ce6ce  Successfully assigned configmap-8308/pod-configmaps-dc4a10b4-d045-4233-8dbb-adbdd13ce6ce to bootstrap-e2e-minion-group-9dh8
configmap-8308  2m4s  Normal  Pulled  pod/pod-configmaps-dc4a10b4-d045-4233-8dbb-adbdd13ce6ce  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
configmap-8308  2m4s  Normal  Created  pod/pod-configmaps-dc4a10b4-d045-4233-8dbb-adbdd13ce6ce  Created container configmap-volume-test
configmap-8308  2m3s  Normal  Started  pod/pod-configmaps-dc4a10b4-d045-4233-8dbb-adbdd13ce6ce  Started container configmap-volume-test
container-lifecycle-hook-9891  69s  Normal  Scheduled  pod/pod-handle-http-request  Successfully assigned container-lifecycle-hook-9891/pod-handle-http-request to bootstrap-e2e-minion-group-9dh8
container-lifecycle-hook-9891  62s  Normal  Pulled  pod/pod-handle-http-request  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
container-lifecycle-hook-9891  61s  Normal  Created  pod/pod-handle-http-request  Created container pod-handle-http-request
container-lifecycle-hook-9891  59s  Normal  Started  pod/pod-handle-http-request  Started container pod-handle-http-request
container-lifecycle-hook-9891  52s  Normal  Scheduled  pod/pod-with-prestop-http-hook  Successfully assigned container-lifecycle-hook-9891/pod-with-prestop-http-hook to bootstrap-e2e-minion-group-n0jl
container-lifecycle-hook-9891  48s  Normal  Pulled  pod/pod-with-prestop-http-hook  Container image "k8s.gcr.io/pause:3.1" already present on machine
container-lifecycle-hook-9891  48s  Normal  Created  pod/pod-with-prestop-http-hook  Created container pod-with-prestop-http-hook
container-lifecycle-hook-9891  47s  Normal  Started  pod/pod-with-prestop-http-hook  Started container pod-with-prestop-http-hook
container-lifecycle-hook-9891  43s  Normal  Killing  pod/pod-with-prestop-http-hook  Stopping container pod-with-prestop-http-hook
container-runtime-6893  3m20s  Normal  Scheduled  pod/terminate-cmd-rpa9a68b078-fbd8-4715-86b8-b3de29447cf9  Successfully assigned container-runtime-6893/terminate-cmd-rpa9a68b078-fbd8-4715-86b8-b3de29447cf9 to bootstrap-e2e-minion-group-mnwl
container-runtime-6893  3m4s  Normal  Pulled  pod/terminate-cmd-rpa9a68b078-fbd8-4715-86b8-b3de29447cf9  Container image "docker.io/library/busybox:1.29" already present on machine
container-runtime-6893  3m4s  Normal  Created  pod/terminate-cmd-rpa9a68b078-fbd8-4715-86b8-b3de29447cf9  Created container terminate-cmd-rpa
container-runtime-6893  3m3s  Normal  Started  pod/terminate-cmd-rpa9a68b078-fbd8-4715-86b8-b3de29447cf9  Started container terminate-cmd-rpa
container-runtime-6893  3m15s  Warning  BackOff  pod/terminate-cmd-rpa9a68b078-fbd8-4715-86b8-b3de29447cf9  Back-off restarting failed container
container-runtime-6893  2m59s  Normal  Killing  pod/terminate-cmd-rpa9a68b078-fbd8-4715-86b8-b3de29447cf9  Stopping container terminate-cmd-rpa
container-runtime-6893  2m44s  Normal  Scheduled  pod/terminate-cmd-rpn2fcfa5e3-a758-4ad6-8a34-27d91617956f  Successfully assigned container-runtime-6893/terminate-cmd-rpn2fcfa5e3-a758-4ad6-8a34-27d91617956f to bootstrap-e2e-minion-group-n0jl
container-runtime-6893  2m43s  Warning  FailedMount  pod/terminate-cmd-rpn2fcfa5e3-a758-4ad6-8a34-27d91617956f  MountVolume.SetUp failed for volume "default-token-k6zwv" : failed to sync secret cache: timed out waiting for the condition
container-runtime-6893  2m42s  Normal  Pulled  pod/terminate-cmd-rpn2fcfa5e3-a758-4ad6-8a34-27d91617956f  Container image "docker.io/library/busybox:1.29" already present on machine
container-runtime-6893  2m42s  Normal  Created  pod/terminate-cmd-rpn2fcfa5e3-a758-4ad6-8a34-27d91617956f  Created container terminate-cmd-rpn
container-runtime-6893  2m41s  Normal  Started  pod/terminate-cmd-rpn2fcfa5e3-a758-4ad6-8a34-27d91617956f  Started container terminate-cmd-rpn
container-runtime-6893  2m57s  Normal  Scheduled  pod/terminate-cmd-rpof64e70ecf-d112-49b5-8d40-b2c6614532ff  Successfully assigned container-runtime-6893/terminate-cmd-rpof64e70ecf-d112-49b5-8d40-b2c6614532ff to bootstrap-e2e-minion-group-n0jl
container-runtime-6893  2m56s  Warning  FailedMount  pod/terminate-cmd-rpof64e70ecf-d112-49b5-8d40-b2c6614532ff  MountVolume.SetUp failed for volume "default-token-k6zwv" : failed to sync secret cache: timed out waiting for the condition
container-runtime-6893  2m53s  Normal  Pulled  pod/terminate-cmd-rpof64e70ecf-d112-49b5-8d40-b2c6614532ff  Container image "docker.io/library/busybox:1.29" already present on machine
container-runtime-6893  2m53s  Normal  Created  pod/terminate-cmd-rpof64e70ecf-d112-49b5-8d40-b2c6614532ff  Created container terminate-cmd-rpof
container-runtime-6893  2m53s  Normal  Started  pod/terminate-cmd-rpof64e70ecf-d112-49b5-8d40-b2c6614532ff  Started container terminate-cmd-rpof
container-runtime-9188  42s  Normal  Scheduled  pod/image-pull-test28101321-6373-4035-b2dc-d2dba5fccd4c  Successfully assigned container-runtime-9188/image-pull-test28101321-6373-4035-b2dc-d2dba5fccd4c to bootstrap-e2e-minion-group-9dh8
container-runtime-9188  35s  Normal  Pulling  pod/image-pull-test28101321-6373-4035-b2dc-d2dba5fccd4c  Pulling image "invalid.com/invalid/alpine:3.1"
container-runtime-9188  35s  Warning  Failed  pod/image-pull-test28101321-6373-4035-b2dc-d2dba5fccd4c  Failed to pull image "invalid.com/invalid/alpine:3.1": rpc error: code = Unknown desc = Error response from daemon: Get https://invalid.com/v2/: remote error: tls: handshake failure
container-runtime-9188  35s  Warning  Failed  pod/image-pull-test28101321-6373-4035-b2dc-d2dba5fccd4c  Error: ErrImagePull
container-runtime-9188  31s  Normal  BackOff  pod/image-pull-test28101321-6373-4035-b2dc-d2dba5fccd4c  Back-off pulling image "invalid.com/invalid/alpine:3.1"
container-runtime-9188  31s  Warning  Failed  pod/image-pull-test28101321-6373-4035-b2dc-d2dba5fccd4c  Error: ImagePullBackOff
containers-1118  41s  Normal  Scheduled  pod/client-containers-d6015634-b07f-4179-87c6-6106f95609cc  Successfully assigned containers-1118/client-containers-d6015634-b07f-4179-87c6-6106f95609cc to bootstrap-e2e-minion-group-9dh8
containers-1118  36s  Normal  Pulled  pod/client-containers-d6015634-b07f-4179-87c6-6106f95609cc  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
containers-1118  35s  Normal  Created  pod/client-containers-d6015634-b07f-4179-87c6-6106f95609cc  Created container test-container
containers-1118  34s  Normal  Started  pod/client-containers-d6015634-b07f-4179-87c6-6106f95609cc  Started container test-container
containers-3208  3m21s  Normal  Scheduled  pod/client-containers-936ea112-07d2-4de2-83c7-04fcff36b9f3  Successfully assigned containers-3208/client-containers-936ea112-07d2-4de2-83c7-04fcff36b9f3 to bootstrap-e2e-minion-group-5wcz
containers-3208  3m19s  Normal  Pulled  pod/client-containers-936ea112-07d2-4de2-83c7-04fcff36b9f3  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
containers-3208  3m18s  Normal  Created  pod/client-containers-936ea112-07d2-4de2-83c7-04fcff36b9f3  Created container test-container
containers-3208  3m18s  Normal  Started  pod/client-containers-936ea112-07d2-4de2-83c7-04fcff36b9f3  Started container test-container
containers-8454  3m52s  Normal  Scheduled  pod/client-containers-68f59b19-7a38-4750-b72a-6e9a60c8d172  Successfully assigned containers-8454/client-containers-68f59b19-7a38-4750-b72a-6e9a60c8d172 to bootstrap-e2e-minion-group-9dh8
containers-8454  3m50s  Normal  Pulled  pod/client-containers-68f59b19-7a38-4750-b72a-6e9a60c8d172  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
containers-8454  3m50s  Normal  Created  pod/client-containers-68f59b19-7a38-4750-b72a-6e9a60c8d172  Created container test-container
containers-8454  3m49s  Normal  Started  pod/client-containers-68f59b19-7a38-4750-b72a-6e9a60c8d172  Started container test-container
crd-webhook-3623  4m2s  Normal  Scheduled  pod/sample-crd-conversion-webhook-deployment-78dcf5dd84-lwlqw  Successfully assigned crd-webhook-3623/sample-crd-conversion-webhook-deployment-78dcf5dd84-lwlqw to bootstrap-e2e-minion-group-mnwl
crd-webhook-3623  4m1s  Normal  Pulled  pod/sample-crd-conversion-webhook-deployment-78dcf5dd84-lwlqw  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
crd-webhook-3623  4m1s  Normal  Created  pod/sample-crd-conversion-webhook-deployment-78dcf5dd84-lwlqw  Created container sample-crd-conversion-webhook
crd-webhook-3623  4m  Normal  Started  pod/sample-crd-conversion-webhook-deployment-78dcf5dd84-lwlqw  Started container sample-crd-conversion-webhook
crd-webhook-3623  4m3s  Normal  SuccessfulCreate  replicaset/sample-crd-conversion-webhook-deployment-78dcf5dd84  Created pod: sample-crd-conversion-webhook-deployment-78dcf5dd84-lwlqw
crd-webhook-3623  4m3s  Normal  ScalingReplicaSet  deployment/sample-crd-conversion-webhook-deployment  Scaled up replica set sample-crd-conversion-webhook-deployment-78dcf5dd84 to 1
cronjob-3956  2m13s  Normal  Scheduled  pod/successful-jobs-history-limit-1579158240-bd8fp  Successfully assigned cronjob-3956/successful-jobs-history-limit-1579158240-bd8fp to bootstrap-e2e-minion-group-9dh8
cronjob-3956  2m8s  Normal  Pulled  pod/successful-jobs-history-limit-1579158240-bd8fp  Container image "docker.io/library/busybox:1.29" already present on machine
cronjob-3956  2m8s  Normal  Created  pod/successful-jobs-history-limit-1579158240-bd8fp  Created container c
cronjob-3956  2m6s  Normal  Started  pod/successful-jobs-history-limit-1579158240-bd8fp  Started container c
cronjob-3956  2m14s  Normal  SuccessfulCreate  job/successful-jobs-history-limit-1579158240  Created pod: successful-jobs-history-limit-1579158240-bd8fp
cronjob-3956  2m3s  Normal  Completed  job/successful-jobs-history-limit-1579158240  Job completed
cronjob-3956  77s  Normal  Scheduled  pod/successful-jobs-history-limit-1579158300-2nntd  Successfully assigned cronjob-3956/successful-jobs-history-limit-1579158300-2nntd to bootstrap-e2e-minion-group-9dh8
cronjob-3956  69s  Normal  Pulled  pod/successful-jobs-history-limit-1579158300-2nntd  Container image "docker.io/library/busybox:1.29" already present on machine
cronjob-3956  68s  Normal  Created  pod/successful-jobs-history-limit-1579158300-2nntd  Created container c
cronjob-3956  64s  Normal  Started  pod/successful-jobs-history-limit-1579158300-2nntd  Started container c
cronjob-3956  77s  Normal  SuccessfulCreate  job/successful-jobs-history-limit-1579158300  Created pod: successful-jobs-history-limit-1579158300-2nntd
cronjob-3956  57s  Normal  Completed  job/successful-jobs-history-limit-1579158300  Job completed
cronjob-3956  2m14s  Normal  SuccessfulCreate  cronjob/successful-jobs-history-limit  Created job successful-jobs-history-limit-1579158240
cronjob-3956  113s  Normal  SawCompletedJob  cronjob/successful-jobs-history-limit  Saw completed job: successful-jobs-history-limit-1579158240, status: Complete
cronjob-3956  78s  Normal  SuccessfulCreate  cronjob/successful-jobs-history-limit  Created job successful-jobs-history-limit-1579158300
cronjob-3956  54s  Normal  SawCompletedJob  cronjob/successful-jobs-history-limit  Saw completed job: successful-jobs-history-limit-1579158300, status: Complete
cronjob-3956  53s  Normal  SuccessfulDelete  cronjob/successful-jobs-history-limit  Deleted job successful-jobs-history-limit-1579158240
cronjob-4939  76s  Normal  Scheduled  pod/failed-jobs-history-limit-1579158300-jbg2j  Successfully assigned cronjob-4939/failed-jobs-history-limit-1579158300-jbg2j to bootstrap-e2e-minion-group-9dh8
cronjob-4939  60s  Normal  Pulled  pod/failed-jobs-history-limit-1579158300-jbg2j  Container image "docker.io/library/busybox:1.29" already present on machine
cronjob-4939  59s  Normal  Created  pod/failed-jobs-history-limit-1579158300-jbg2j  Created container c
cronjob-4939  58s  Normal  Started  pod/failed-jobs-history-limit-1579158300-jbg2j  Started container c
cronjob-4939  51s  Warning  BackOff  pod/failed-jobs-history-limit-1579158300-jbg2j  Back-off restarting failed container
cronjob-4939  76s  Normal  SuccessfulCreate  job/failed-jobs-history-limit-1579158300  Created pod: failed-jobs-history-limit-1579158300-jbg2j
cronjob-4939  50s  Normal  SuccessfulDelete  job/failed-jobs-history-limit-1579158300  Deleted pod: failed-jobs-history-limit-1579158300-jbg2j
cronjob-4939  50s  Warning  BackoffLimitExceeded  job/failed-jobs-history-limit-1579158300  Job has reached the specified backoff limit
cronjob-4939  17s  Normal  Scheduled  pod/failed-jobs-history-limit-1579158360-2jq4g  Successfully assigned cronjob-4939/failed-jobs-history-limit-1579158360-2jq4g to bootstrap-e2e-minion-group-n0jl
cronjob-4939  14s  Normal  Pulled  pod/failed-jobs-history-limit-1579158360-2jq4g  Container image "docker.io/library/busybox:1.29" already present on machine
cronjob-4939  14s  Normal  Created  pod/failed-jobs-history-limit-1579158360-2jq4g  Created container c
cronjob-4939  13s  Normal  Started  pod/failed-jobs-history-limit-1579158360-2jq4g  Started container c
cronjob-4939  11s  Warning  BackOff  pod/failed-jobs-history-limit-1579158360-2jq4g  Back-off restarting failed container
cronjob-4939  18s  Normal  SuccessfulCreate  job/failed-jobs-history-limit-1579158360  Created pod: failed-jobs-history-limit-1579158360-2jq4g
cronjob-4939  9s  Normal  SuccessfulDelete  job/failed-jobs-history-limit-1579158360  Deleted pod: failed-jobs-history-limit-1579158360-2jq4g
cronjob-4939  9s  Warning  BackoffLimitExceeded  job/failed-jobs-history-limit-1579158360  Job has reached the specified backoff limit
cronjob-4939  77s  Normal  SuccessfulCreate  cronjob/failed-jobs-history-limit  Created job failed-jobs-history-limit-1579158300
cronjob-4939  42s  Normal  SawCompletedJob  cronjob/failed-jobs-history-limit  Saw completed job: failed-jobs-history-limit-1579158300, status: Failed
cronjob-4939  18s  Normal  SuccessfulCreate  cronjob/failed-jobs-history-limit  Created job failed-jobs-history-limit-1579158360
cronjob-4939  7s  Normal  SawCompletedJob  cronjob/failed-jobs-history-limit  Saw completed job: failed-jobs-history-limit-1579158360, status: Failed
cronjob-4939  6s  Normal  SuccessfulDelete  cronjob/failed-jobs-history-limit  Deleted job failed-jobs-history-limit-1579158300
csi-mock-volumes-2747  2m14s  Warning  FailedMount  pod/csi-mockplugin-0  MountVolume.SetUp failed for volume "csi-mock-token-zkw8k" : failed to sync secret cache: timed out waiting for the condition
csi-mock-volumes-2747  2m12s  Normal  Pulled  pod/csi-mockplugin-0  Container image "quay.io/k8scsi/csi-provisioner:v1.5.0" already present on machine
csi-mock-volumes-2747  2m12s  Normal  Created  pod/csi-mockplugin-0  Created container csi-provisioner
csi-mock-volumes-2747  2m10s  Normal  Started  pod/csi-mockplugin-0  Started container csi-provisioner
csi-mock-volumes-2747  2m10s  Normal  Pulled  pod/csi-mockplugin-0  Container image "quay.io/k8scsi/csi-node-driver-registrar:v1.2.0" already present on machine
csi-mock-volumes-2747  2m10s  Normal  Created  pod/csi-mockplugin-0  Created container driver-registrar
csi-mock-volumes-2747  2m10s  Normal  Started  pod/csi-mockplugin-0  Started container driver-registrar
csi-mock-volumes-2747  2m10s  Normal  Pulled  pod/csi-mockplugin-0  Container image "quay.io/k8scsi/mock-driver:v2.1.0" already present on machine
csi-mock-volumes-2747  2m9s  Normal  Created  pod/csi-mockplugin-0  Created container mock
csi-mock-volumes-2747  2m9s  Normal  Started  pod/csi-mockplugin-0  Started container mock
csi-mock-volumes-2747  4s  Normal  Killing  pod/csi-mockplugin-0  Stopping container csi-provisioner
csi-mock-volumes-2747  4s  Normal  Killing  pod/csi-mockplugin-0  Stopping container mock
csi-mock-volumes-2747  4s  Normal  Killing  pod/csi-mockplugin-0  Stopping container driver-registrar
csi-mock-volumes-2747  2m13s  Normal  Pulled  pod/csi-mockplugin-resizer-0  Container image "quay.io/k8scsi/csi-resizer:v0.4.0" already present on machine
csi-mock-volumes-2747  2m13s  Normal  Created  pod/csi-mockplugin-resizer-0  Created container csi-resizer
csi-mock-volumes-2747  2m12s  Normal  Started  pod/csi-mockplugin-resizer-0  Started container csi-resizer
csi-mock-volumes-2747  2s  Normal  Killing  pod/csi-mockplugin-resizer-0  Stopping container csi-resizer
csi-mock-volumes-2747  2m15s  Normal  SuccessfulCreate  statefulset/csi-mockplugin-resizer  create Pod csi-mockplugin-resizer-0 in StatefulSet csi-mockplugin-resizer successful
csi-mock-volumes-2747  2m15s  Normal  SuccessfulCreate  statefulset/csi-mockplugin  create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful
csi-mock-volumes-2747  2m14s  Normal  ExternalProvisioning  persistentvolumeclaim/pvc-74r6k  waiting for a volume to be created, either by external provisioner "csi-mock-csi-mock-volumes-2747" or manually created by system administrator
csi-mock-volumes-2747  2m8s  Normal  Provisioning  persistentvolumeclaim/pvc-74r6k  External provisioner is provisioning volume for claim "csi-mock-volumes-2747/pvc-74r6k"
csi-mock-volumes-2747  2m8s  Normal  ProvisioningSucceeded  persistentvolumeclaim/pvc-74r6k  Successfully provisioned volume pvc-4dc9b20d-944f-466e-b136-2ccaf19b08c8
csi-mock-volumes-2747  2m  Warning  ExternalExpanding  persistentvolumeclaim/pvc-74r6k  Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.
csi-mock-volumes-2747  119s  Normal  Resizing  persistentvolumeclaim/pvc-74r6k  External resizer is resizing volume pvc-4dc9b20d-944f-466e-b136-2ccaf19b08c8
csi-mock-volumes-2747  119s  Normal  FileSystemResizeRequired  persistentvolumeclaim/pvc-74r6k  Require file system resize of volume on node
csi-mock-volumes-2747  36s  Normal  FileSystemResizeSuccessful  persistentvolumeclaim/pvc-74r6k
          MountVolume.NodeExpandVolume succeeded for volume \"pvc-4dc9b20d-944f-466e-b136-2ccaf19b08c8\"\ncsi-mock-volumes-2747                2m3s        Normal    Pulled                               pod/pvc-volume-tester-dh6dx                                                      Container image \"k8s.gcr.io/pause:3.1\" already present on machine\ncsi-mock-volumes-2747                2m3s        Normal    Created                              pod/pvc-volume-tester-dh6dx                                                      Created container volume-tester\ncsi-mock-volumes-2747                2m3s        Normal    Started                              pod/pvc-volume-tester-dh6dx                                                      Started container volume-tester\ncsi-mock-volumes-2747                36s         Normal    FileSystemResizeSuccessful           pod/pvc-volume-tester-dh6dx                                                      MountVolume.NodeExpandVolume succeeded for volume \"pvc-4dc9b20d-944f-466e-b136-2ccaf19b08c8\"\ncsi-mock-volumes-2747                33s         Normal    Killing                              pod/pvc-volume-tester-dh6dx                                                      Stopping container volume-tester\ncsi-mock-volumes-4068                7m10s       Normal    Pulled                               pod/csi-mockplugin-0                                                             Container image \"quay.io/k8scsi/csi-provisioner:v1.5.0\" already present on machine\ncsi-mock-volumes-4068                7m10s       Normal    Created                              pod/csi-mockplugin-0                                                             Created container csi-provisioner\ncsi-mock-volumes-4068                7m9s        Normal    Started                              pod/csi-mockplugin-0                                                             Started container csi-provisioner\ncsi-mock-volumes-4068                7m9s        Normal    Pulled                               pod/csi-mockplugin-0                                                             Container image \"quay.io/k8scsi/csi-node-driver-registrar:v1.2.0\" already present on machine\ncsi-mock-volumes-4068                7m9s        Normal    Created                              pod/csi-mockplugin-0                                                             Created container driver-registrar\ncsi-mock-volumes-4068                7m8s        Normal    Started                              pod/csi-mockplugin-0                                                             Started container driver-registrar\ncsi-mock-volumes-4068                7m8s        Normal    Pulling                              pod/csi-mockplugin-0                                                             Pulling image \"quay.io/k8scsi/mock-driver:v2.1.0\"\ncsi-mock-volumes-4068                7m3s        Normal    Pulled                               pod/csi-mockplugin-0                                                             Successfully pulled image \"quay.io/k8scsi/mock-driver:v2.1.0\"\ncsi-mock-volumes-4068                7m3s        Normal    Created                              pod/csi-mockplugin-0                                                             Created container mock\ncsi-mock-volumes-4068                7m2s        Normal    Started                              pod/csi-mockplugin-0                                                             Started container mock\ncsi-mock-volumes-4068                
3m59s       Normal    Killing                              pod/csi-mockplugin-0                                                             Stopping container csi-provisioner\ncsi-mock-volumes-4068                3m59s       Normal    Killing                              pod/csi-mockplugin-0                                                             Stopping container driver-registrar\ncsi-mock-volumes-4068                7m10s       Normal    Pulled                               pod/csi-mockplugin-attacher-0                                                    Container image \"quay.io/k8scsi/csi-attacher:v2.1.0\" already present on machine\ncsi-mock-volumes-4068                7m10s       Normal    Created                              pod/csi-mockplugin-attacher-0                                                    Created container csi-attacher\ncsi-mock-volumes-4068                7m10s       Normal    Started                              pod/csi-mockplugin-attacher-0                                                    Started container csi-attacher\ncsi-mock-volumes-4068                7m16s       Normal    SuccessfulCreate                     statefulset/csi-mockplugin-attacher                                              create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful\ncsi-mock-volumes-4068                7m16s       Normal    SuccessfulCreate                     statefulset/csi-mockplugin                                                       create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\ncsi-mock-volumes-4068                7m2s        Normal    ExternalProvisioning                 persistentvolumeclaim/pvc-n6ntq                                                  waiting for a volume to be created, either by external provisioner \"csi-mock-csi-mock-volumes-4068\" or manually created by system administrator\ncsi-mock-volumes-4068                7m1s        Normal    Provisioning                         persistentvolumeclaim/pvc-n6ntq                                                  External provisioner is provisioning volume for claim \"csi-mock-volumes-4068/pvc-n6ntq\"\ncsi-mock-volumes-4068                7m1s        Normal    ProvisioningSucceeded                persistentvolumeclaim/pvc-n6ntq                                                  Successfully provisioned volume pvc-f0716a12-286d-470b-9250-06f9baa26ea7\ncsi-mock-volumes-4068                6m41s       Warning   ExternalExpanding                    persistentvolumeclaim/pvc-n6ntq                                                  Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.\ncsi-mock-volumes-4068                6m57s       Normal    SuccessfulAttachVolume               pod/pvc-volume-tester-bllcn                                                      AttachVolume.Attach succeeded for volume \"pvc-f0716a12-286d-470b-9250-06f9baa26ea7\"\ncsi-mock-volumes-4068                6m51s       Normal    Pulled                               pod/pvc-volume-tester-bllcn                                                      Container image \"k8s.gcr.io/pause:3.1\" already present on machine\ncsi-mock-volumes-4068                6m51s       Normal    Created                              pod/pvc-volume-tester-bllcn                                                      Created container volume-tester\ncsi-mock-volumes-4068                6m49s       Normal    Started                              
pod/pvc-volume-tester-bllcn                                                      Started container volume-tester\ncsi-mock-volumes-4068                4m39s       Normal    Killing                              pod/pvc-volume-tester-bllcn                                                      Stopping container volume-tester\ncsi-mock-volumes-5993                5m55s       Normal    Pulled                               pod/csi-mockplugin-0                                                             Container image \"quay.io/k8scsi/csi-provisioner:v1.5.0\" already present on machine\ncsi-mock-volumes-5993                5m55s       Normal    Created                              pod/csi-mockplugin-0                                                             Created container csi-provisioner\ncsi-mock-volumes-5993                5m53s       Normal    Started                              pod/csi-mockplugin-0                                                             Started container csi-provisioner\ncsi-mock-volumes-5993                5m53s       Normal    Pulled                               pod/csi-mockplugin-0                                                             Container image \"quay.io/k8scsi/csi-node-driver-registrar:v1.2.0\" already present on machine\ncsi-mock-volumes-5993                5m53s       Normal    Created                              pod/csi-mockplugin-0                                                             Created container driver-registrar\ncsi-mock-volumes-5993                5m51s       Normal    Started                              pod/csi-mockplugin-0                                                             Started container driver-registrar\ncsi-mock-volumes-5993                5m51s       Normal    Pulled                               pod/csi-mockplugin-0                                                             Container image \"quay.io/k8scsi/mock-driver:v2.1.0\" already present on machine\ncsi-mock-volumes-5993                5m51s       Normal    Created                              pod/csi-mockplugin-0                                                             Created container mock\ncsi-mock-volumes-5993                5m50s       Normal    Started                              pod/csi-mockplugin-0                                                             Started container mock\ncsi-mock-volumes-5993                2m50s       Normal    Killing                              pod/csi-mockplugin-0                                                             Stopping container mock\ncsi-mock-volumes-5993                5m55s       Normal    Pulled                               pod/csi-mockplugin-attacher-0                                                    Container image \"quay.io/k8scsi/csi-attacher:v2.1.0\" already present on machine\ncsi-mock-volumes-5993                5m55s       Normal    Created                              pod/csi-mockplugin-attacher-0                                                    Created container csi-attacher\ncsi-mock-volumes-5993                5m54s       Normal    Started                              pod/csi-mockplugin-attacher-0                                                    Started container csi-attacher\ncsi-mock-volumes-5993                6m          Normal    SuccessfulCreate                     statefulset/csi-mockplugin-attacher                                              create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful\ncsi-mock-volumes-5993                
6m          Normal    SuccessfulCreate                     statefulset/csi-mockplugin                                                       create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\ncsi-mock-volumes-5993                5m59s       Normal    ExternalProvisioning                 persistentvolumeclaim/pvc-jpsqr                                                  waiting for a volume to be created, either by external provisioner \"csi-mock-csi-mock-volumes-5993\" or manually created by system administrator\ncsi-mock-volumes-5993                5m49s       Normal    Provisioning                         persistentvolumeclaim/pvc-jpsqr                                                  External provisioner is provisioning volume for claim \"csi-mock-volumes-5993/pvc-jpsqr\"\ncsi-mock-volumes-5993                5m48s       Normal    ProvisioningSucceeded                persistentvolumeclaim/pvc-jpsqr                                                  Successfully provisioned volume pvc-64d5cb78-959d-4001-928c-f98478b38946\ncsi-mock-volumes-5993                5m44s       Normal    SuccessfulAttachVolume               pod/pvc-volume-tester-jvflx                                                      AttachVolume.Attach succeeded for volume \"pvc-64d5cb78-959d-4001-928c-f98478b38946\"\ncsi-mock-volumes-5993                5m33s       Normal    Pulled                               pod/pvc-volume-tester-jvflx                                                      Container image \"k8s.gcr.io/pause:3.1\" already present on machine\ncsi-mock-volumes-5993                5m32s       Normal    Created                              pod/pvc-volume-tester-jvflx                                                      Created container volume-tester\ncsi-mock-volumes-5993                5m31s       Normal    Started                              pod/pvc-volume-tester-jvflx                                                      Started container volume-tester\ncsi-mock-volumes-5993                5m22s       Normal    Killing                              pod/pvc-volume-tester-jvflx                                                      Stopping container volume-tester\ncsi-mock-volumes-8708                27s         Normal    Pulled                               pod/csi-mockplugin-0                                                             Container image \"quay.io/k8scsi/csi-provisioner:v1.5.0\" already present on machine\ncsi-mock-volumes-8708                26s         Normal    Created                              pod/csi-mockplugin-0                                                             Created container csi-provisioner\ncsi-mock-volumes-8708                25s         Normal    Started                              pod/csi-mockplugin-0                                                             Started container csi-provisioner\ncsi-mock-volumes-8708                25s         Normal    Pulled                               pod/csi-mockplugin-0                                                             Container image \"quay.io/k8scsi/csi-node-driver-registrar:v1.2.0\" already present on machine\ncsi-mock-volumes-8708                25s         Normal    Created                              pod/csi-mockplugin-0                                                             Created container driver-registrar\ncsi-mock-volumes-8708                24s         Normal    Started                              pod/csi-mockplugin-0                                                             Started 
container driver-registrar\ncsi-mock-volumes-8708                24s         Normal    Pulled                               pod/csi-mockplugin-0                                                             Container image \"quay.io/k8scsi/mock-driver:v2.1.0\" already present on machine\ncsi-mock-volumes-8708                24s         Normal    Created                              pod/csi-mockplugin-0                                                             Created container mock\ncsi-mock-volumes-8708                24s         Normal    Started                              pod/csi-mockplugin-0                                                             Started container mock\ncsi-mock-volumes-8708                27s         Normal    Pulled                               pod/csi-mockplugin-attacher-0                                                    Container image \"quay.io/k8scsi/csi-attacher:v2.1.0\" already present on machine\ncsi-mock-volumes-8708                27s         Normal    Created                              pod/csi-mockplugin-attacher-0                                                    Created container csi-attacher\ncsi-mock-volumes-8708                25s         Normal    Started                              pod/csi-mockplugin-attacher-0                                                    Started container csi-attacher\ncsi-mock-volumes-8708                28s         Normal    SuccessfulCreate                     statefulset/csi-mockplugin-attacher                                              create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful\ncsi-mock-volumes-8708                25s         Normal    Pulled                               pod/csi-mockplugin-resizer-0                                                     Container image \"quay.io/k8scsi/csi-resizer:v0.4.0\" already present on machine\ncsi-mock-volumes-8708                25s         Normal    Created                              pod/csi-mockplugin-resizer-0                                                     Created container csi-resizer\ncsi-mock-volumes-8708                24s         Normal    Started                              pod/csi-mockplugin-resizer-0                                                     Started container csi-resizer\ncsi-mock-volumes-8708                28s         Normal    SuccessfulCreate                     statefulset/csi-mockplugin-resizer                                               create Pod csi-mockplugin-resizer-0 in StatefulSet csi-mockplugin-resizer successful\ncsi-mock-volumes-8708                28s         Normal    SuccessfulCreate                     statefulset/csi-mockplugin                                                       create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\ncsi-mock-volumes-8708                27s         Normal    ExternalProvisioning                 persistentvolumeclaim/pvc-fcb2q                                                  waiting for a volume to be created, either by external provisioner \"csi-mock-csi-mock-volumes-8708\" or manually created by system administrator\ncsi-mock-volumes-8708                22s         Normal    Provisioning                         persistentvolumeclaim/pvc-fcb2q                                                  External provisioner is provisioning volume for claim \"csi-mock-volumes-8708/pvc-fcb2q\"\ncsi-mock-volumes-8708                22s         Normal    ProvisioningSucceeded                persistentvolumeclaim/pvc-fcb2q                   
                               Successfully provisioned volume pvc-ce2eee63-5b1e-4a00-b21e-ff3e0e2af5ce\ncsi-mock-volumes-8708                17s         Normal    SuccessfulAttachVolume               pod/pvc-volume-tester-6szrj                                                      AttachVolume.Attach succeeded for volume \"pvc-ce2eee63-5b1e-4a00-b21e-ff3e0e2af5ce\"\ndefault                              18m         Normal    RegisteredNode                       node/bootstrap-e2e-master                                                        Node bootstrap-e2e-master event: Registered Node bootstrap-e2e-master in Controller\ndefault                              18m         Normal    Starting                             node/bootstrap-e2e-minion-group-5wcz                                             Starting kubelet.\ndefault                              18m         Normal    NodeHasSufficientMemory              node/bootstrap-e2e-minion-group-5wcz                                             Node bootstrap-e2e-minion-group-5wcz status is now: NodeHasSufficientMemory\ndefault                              18m         Normal    NodeHasNoDiskPressure                node/bootstrap-e2e-minion-group-5wcz                                             Node bootstrap-e2e-minion-group-5wcz status is now: NodeHasNoDiskPressure\ndefault                              18m         Normal    NodeHasSufficientPID                 node/bootstrap-e2e-minion-group-5wcz                                             Node bootstrap-e2e-minion-group-5wcz status is now: NodeHasSufficientPID\ndefault                              18m         Normal    NodeAllocatableEnforced              node/bootstrap-e2e-minion-group-5wcz                                             Updated Node Allocatable limit across pods\ndefault                              18m         Normal    NodeReady                            node/bootstrap-e2e-minion-group-5wcz                                             Node bootstrap-e2e-minion-group-5wcz status is now: NodeReady\ndefault                              18m         Warning   ContainerdStart                      node/bootstrap-e2e-minion-group-5wcz                                             Starting containerd container runtime...\ndefault                              18m         Warning   DockerStart                          node/bootstrap-e2e-minion-group-5wcz                                             Starting Docker Application Container Engine...\ndefault                              18m         Warning   KubeletStart                         node/bootstrap-e2e-minion-group-5wcz                                             Started Kubernetes kubelet.\ndefault                              18m         Normal    Starting                             node/bootstrap-e2e-minion-group-5wcz                                             Starting kube-proxy.\ndefault                              18m         Normal    RegisteredNode                       node/bootstrap-e2e-minion-group-5wcz                                             Node bootstrap-e2e-minion-group-5wcz event: Registered Node bootstrap-e2e-minion-group-5wcz in Controller\ndefault                              9m35s       Warning   TaskHung                             node/bootstrap-e2e-minion-group-5wcz                                             kernel: INFO: task umount.aufs:21568 blocked for more than 120 seconds.\ndefault                              9m35s       Normal    AUFSUmountHung                       
node/bootstrap-e2e-minion-group-5wcz                                             Node condition KernelDeadlock is now: True, reason: AUFSUmountHung\ndefault                              18m         Normal    Starting                             node/bootstrap-e2e-minion-group-9dh8                                             Starting kubelet.\ndefault                              18m         Normal    NodeHasSufficientMemory              node/bootstrap-e2e-minion-group-9dh8                                             Node bootstrap-e2e-minion-group-9dh8 status is now: NodeHasSufficientMemory\ndefault                              18m         Normal    NodeHasNoDiskPressure                node/bootstrap-e2e-minion-group-9dh8                                             Node bootstrap-e2e-minion-group-9dh8 status is now: NodeHasNoDiskPressure\ndefault                              18m         Normal    NodeHasSufficientPID                 node/bootstrap-e2e-minion-group-9dh8                                             Node bootstrap-e2e-minion-group-9dh8 status is now: NodeHasSufficientPID\ndefault                              18m         Normal    NodeAllocatableEnforced              node/bootstrap-e2e-minion-group-9dh8                                             Updated Node Allocatable limit across pods\ndefault                              18m         Normal    Starting                             node/bootstrap-e2e-minion-group-9dh8                                             Starting kube-proxy.\ndefault                              18m         Normal    RegisteredNode                       node/bootstrap-e2e-minion-group-9dh8                                             Node bootstrap-e2e-minion-group-9dh8 event: Registered Node bootstrap-e2e-minion-group-9dh8 in Controller\ndefault                              18m         Warning   ContainerdStart                      node/bootstrap-e2e-minion-group-9dh8                                             Starting containerd container runtime...\ndefault                              18m         Warning   DockerStart                          node/bootstrap-e2e-minion-group-9dh8                                             Starting Docker Application Container Engine...\ndefault                              18m         Warning   KubeletStart                         node/bootstrap-e2e-minion-group-9dh8                                             Started Kubernetes kubelet.\ndefault                              18m         Normal    NodeReady                            node/bootstrap-e2e-minion-group-9dh8                                             Node bootstrap-e2e-minion-group-9dh8 status is now: NodeReady\ndefault                              9m31s       Warning   TaskHung                             node/bootstrap-e2e-minion-group-9dh8                                             kernel: INFO: task umount.aufs:21568 blocked for more than 120 seconds.\ndefault                              9m31s       Normal    AUFSUmountHung                       node/bootstrap-e2e-minion-group-9dh8                                             Node condition KernelDeadlock is now: True, reason: AUFSUmountHung\ndefault                              18m         Normal    Starting                             node/bootstrap-e2e-minion-group-mnwl                                             Starting kubelet.\ndefault                              18m         Normal    NodeHasSufficientMemory              node/bootstrap-e2e-minion-group-mnwl                                   
          Node bootstrap-e2e-minion-group-mnwl status is now: NodeHasSufficientMemory\ndefault                              18m         Normal    NodeHasNoDiskPressure                node/bootstrap-e2e-minion-group-mnwl                                             Node bootstrap-e2e-minion-group-mnwl status is now: NodeHasNoDiskPressure\ndefault                              18m         Normal    NodeHasSufficientPID                 node/bootstrap-e2e-minion-group-mnwl                                             Node bootstrap-e2e-minion-group-mnwl status is now: NodeHasSufficientPID\ndefault                              18m         Normal    NodeAllocatableEnforced              node/bootstrap-e2e-minion-group-mnwl                                             Updated Node Allocatable limit across pods\ndefault                              18m         Normal    NodeReady                            node/bootstrap-e2e-minion-group-mnwl                                             Node bootstrap-e2e-minion-group-mnwl status is now: NodeReady\ndefault                              18m         Normal    Starting                             node/bootstrap-e2e-minion-group-mnwl                                             Starting kube-proxy.\ndefault                              18m         Normal    RegisteredNode                       node/bootstrap-e2e-minion-group-mnwl                                             Node bootstrap-e2e-minion-group-mnwl event: Registered Node bootstrap-e2e-minion-group-mnwl in Controller\ndefault                              18m         Warning   ContainerdStart                      node/bootstrap-e2e-minion-group-mnwl                                             Starting containerd container runtime...\ndefault                              18m         Warning   DockerStart                          node/bootstrap-e2e-minion-group-mnwl                                             Starting Docker Application Container Engine...\ndefault                              18m         Warning   KubeletStart                         node/bootstrap-e2e-minion-group-mnwl                                             Started Kubernetes kubelet.\ndefault                              9m28s       Warning   TaskHung                             node/bootstrap-e2e-minion-group-mnwl                                             kernel: INFO: task umount.aufs:21568 blocked for more than 120 seconds.\ndefault                              9m28s       Normal    AUFSUmountHung                       node/bootstrap-e2e-minion-group-mnwl                                             Node condition KernelDeadlock is now: True, reason: AUFSUmountHung\ndefault                              18m         Normal    Starting                             node/bootstrap-e2e-minion-group-n0jl                                             Starting kubelet.\ndefault                              18m         Normal    NodeHasSufficientMemory              node/bootstrap-e2e-minion-group-n0jl                                             Node bootstrap-e2e-minion-group-n0jl status is now: NodeHasSufficientMemory\ndefault                              18m         Normal    NodeHasNoDiskPressure                node/bootstrap-e2e-minion-group-n0jl                                             Node bootstrap-e2e-minion-group-n0jl status is now: NodeHasNoDiskPressure\ndefault                              18m         Normal    NodeHasSufficientPID                 node/bootstrap-e2e-minion-group-n0jl                                         
    Node bootstrap-e2e-minion-group-n0jl status is now: NodeHasSufficientPID\ndefault                              18m         Normal    NodeAllocatableEnforced              node/bootstrap-e2e-minion-group-n0jl                                             Updated Node Allocatable limit across pods\ndefault                              18m         Normal    NodeReady                            node/bootstrap-e2e-minion-group-n0jl                                             Node bootstrap-e2e-minion-group-n0jl status is now: NodeReady\ndefault                              18m         Warning   ContainerdStart                      node/bootstrap-e2e-minion-group-n0jl                                             Starting containerd container runtime...\ndefault                              18m         Warning   DockerStart                          node/bootstrap-e2e-minion-group-n0jl                                             Starting Docker Application Container Engine...\ndefault                              18m         Warning   KubeletStart                         node/bootstrap-e2e-minion-group-n0jl                                             Started Kubernetes kubelet.\ndefault                              18m         Normal    Starting                             node/bootstrap-e2e-minion-group-n0jl                                             Starting kube-proxy.\ndefault                              18m         Normal    RegisteredNode                       node/bootstrap-e2e-minion-group-n0jl                                             Node bootstrap-e2e-minion-group-n0jl event: Registered Node bootstrap-e2e-minion-group-n0jl in Controller\ndefault                              9m24s       Warning   TaskHung                             node/bootstrap-e2e-minion-group-n0jl                                             kernel: INFO: task umount.aufs:21568 blocked for more than 120 seconds.\ndefault                              9m24s       Normal    AUFSUmountHung                       node/bootstrap-e2e-minion-group-n0jl                                             Node condition KernelDeadlock is now: True, reason: AUFSUmountHung\ndefault                              8m32s       Warning   FailedToCreateEndpoint               endpoints/latency-svc-vrhnq                                                      Failed to create endpoint for service svc-latency-3850/latency-svc-vrhnq: endpoints \"latency-svc-vrhnq\" already exists\ndefault                              14m         Normal    RecyclerPod                          persistentvolume/nfs-5nx7n                                                       Recycler pod: Successfully assigned default/recycler-for-nfs-5nx7n to bootstrap-e2e-minion-group-mnwl\ndefault                              14m         Normal    RecyclerPod                          persistentvolume/nfs-5nx7n                                                       Recycler pod: Pulling image \"k8s.gcr.io/busybox:1.27\"\ndefault                              14m         Normal    RecyclerPod                          persistentvolume/nfs-5nx7n                                                       Recycler pod: Successfully pulled image \"k8s.gcr.io/busybox:1.27\"\ndefault                              14m         Normal    RecyclerPod                          persistentvolume/nfs-5nx7n                                                       Recycler pod: Created container pv-recycler\ndefault                              14m         Normal    RecyclerPod                          
persistentvolume/nfs-5nx7n                                                       Recycler pod: Started container pv-recycler\ndefault                              14m         Normal    VolumeRecycled                       persistentvolume/nfs-5nx7n                                                       Volume recycled\ndefault                              14m         Normal    RecyclerPod                          persistentvolume/nfs-5nx7n                                                       Recycler pod: Container image \"k8s.gcr.io/busybox:1.27\" already present on machine\ndefault                              9m14s       Normal    RecyclerPod                          persistentvolume/nfs-5nx7n                                                       Recycler pod: Pod was active on the node longer than the specified deadline\ndefault                              9m13s       Normal    RecyclerPod                          persistentvolume/nfs-5nx7n                                                       Recycler pod: Stopping container pv-recycler\ndefault                              11m         Normal    VolumeDelete                         persistentvolume/pvc-0ef0781f-96d1-4447-ae5c-c8b55bc46b40                        googleapi: Error 400: The disk resource 'projects/k8s-jkns-e2e-gke-ubuntu-slow/zones/us-west1-b/disks/bootstrap-e2e-dynamic-pvc-0ef0781f-96d1-4447-ae5c-c8b55bc46b40' is already being used by 'projects/k8s-jkns-e2e-gke-ubuntu-slow/zones/us-west1-b/instances/bootstrap-e2e-minion-group-mnwl', resourceInUseByAnotherResource\ndefault                              14m         Normal    VolumeDelete                         persistentvolume/pvc-1a1ef5a8-a9ab-4c08-9a21-3b9a9258c9c9                        googleapi: Error 400: The disk resource 'projects/k8s-jkns-e2e-gke-ubuntu-slow/zones/us-west1-b/disks/bootstrap-e2e-dynamic-pvc-1a1ef5a8-a9ab-4c08-9a21-3b9a9258c9c9' is already being used by 'projects/k8s-jkns-e2e-gke-ubuntu-slow/zones/us-west1-b/instances/bootstrap-e2e-minion-group-5wcz', resourceInUseByAnotherResource\ndefault                              6m57s       Normal    VolumeDelete                         persistentvolume/pvc-1a460b69-a475-440b-a9f4-8fcaa1455aec                        googleapi: Error 400: The disk resource 'projects/k8s-jkns-e2e-gke-ubuntu-slow/zones/us-west1-b/disks/bootstrap-e2e-dynamic-pvc-1a460b69-a475-440b-a9f4-8fcaa1455aec' is already being used by 'projects/k8s-jkns-e2e-gke-ubuntu-slow/zones/us-west1-b/instances/bootstrap-e2e-minion-group-9dh8', resourceInUseByAnotherResource\ndefault                              8m2s        Normal    VolumeDelete                         persistentvolume/pvc-236a6bb9-a806-441b-8813-b28ce9227a09                        googleapi: Error 400: The disk resource 'projects/k8s-jkns-e2e-gke-ubuntu-slow/zones/us-west1-b/disks/bootstrap-e2e-dynamic-pvc-236a6bb9-a806-441b-8813-b28ce9227a09' is already being used by 'projects/k8s-jkns-e2e-gke-ubuntu-slow/zones/us-west1-b/instances/bootstrap-e2e-minion-group-mnwl', resourceInUseByAnotherResource\ndefault                              7m35s       Normal    VolumeDelete                         persistentvolume/pvc-3a5a5a9b-f51c-4971-98db-0886e6877af1                        googleapi: Error 400: The disk resource 'projects/k8s-jkns-e2e-gke-ubuntu-slow/zones/us-west1-b/disks/bootstrap-e2e-dynamic-pvc-3a5a5a9b-f51c-4971-98db-0886e6877af1' is already being used by 'projects/k8s-jkns-e2e-gke-ubuntu-slow/zones/us-west1-b/instances/bootstrap-e2e-minion-group-9dh8', 
resourceInUseByAnotherResource\ndefault                              9m          Normal    VolumeDelete                         persistentvolume/pvc-434c45dc-6957-4c61-80ad-9342c0997d8b                        googleapi: Error 400: The disk resource 'projects/k8s-jkns-e2e-gke-ubuntu-slow/zones/us-west1-b/disks/bootstrap-e2e-dynamic-pvc-434c45dc-6957-4c61-80ad-9342c0997d8b' is already being used by 'projects/k8s-jkns-e2e-gke-ubuntu-slow/zones/us-west1-b/instances/bootstrap-e2e-minion-group-9dh8', resourceInUseByAnotherResource\ndefault                              3m22s       Normal    VolumeDelete                         persistentvolume/pvc-51ba9a7b-d48f-42a1-9f0b-a2a3f9ec05ee                        googleapi: Error 400: The disk resource 'projects/k8s-jkns-e2e-gke-ubuntu-slow/zones/us-west1-b/disks/bootstrap-e2e-dynamic-pvc-51ba9a7b-d48f-42a1-9f0b-a2a3f9ec05ee' is already being used by 'projects/k8s-jkns-e2e-gke-ubuntu-slow/zones/us-west1-b/instances/bootstrap-e2e-minion-group-mnwl', resourceInUseByAnotherResource\ndefault                              9m3s        Normal    VolumeDelete                         persistentvolume/pvc-5f0fa792-6ce6-4d3f-b39b-12ce9b959822                        googleapi: Error 400: The disk resource 'projects/k8s-jkns-e2e-gke-ubuntu-slow/zones/us-west1-b/disks/bootstrap-e2e-dynamic-pvc-5f0fa792-6ce6-4d3f-b39b-12ce9b959822' is already being used by 'projects/k8s-jkns-e2e-gke-ubuntu-slow/zones/us-west1-b/instances/bootstrap-e2e-minion-group-5wcz', resourceInUseByAnotherResource\ndefault                              2m6s        Normal    VolumeDelete                         persistentvolume/pvc-64d10ac6-746c-481f-934d-8ec22c7f2716                        googleapi: Error 400: The disk resource 'projects/k8s-jkns-e2e-gke-ubuntu-slow/zones/us-west1-b/disks/bootstrap-e2e-dynamic-pvc-64d10ac6-746c-481f-934d-8ec22c7f2716' is already being used by 'projects/k8s-jkns-e2e-gke-ubuntu-slow/zones/us-west1-b/instances/bootstrap-e2e-minion-group-n0jl', resourceInUseByAnotherResource\ndefault                              7s          Normal    VolumeDelete                         persistentvolume/pvc-89343211-bcb2-4e18-951f-1633dceab414                        googleapi: Error 400: The disk resource 'projects/k8s-jkns-e2e-gke-ubuntu-slow/zones/us-west1-b/disks/bootstrap-e2e-dynamic-pvc-89343211-bcb2-4e18-951f-1633dceab414' is already being used by 'projects/k8s-jkns-e2e-gke-ubuntu-slow/zones/us-west1-b/instances/bootstrap-e2e-minion-group-mnwl', resourceInUseByAnotherResource\ndefault                              2m58s       Normal    VolumeDelete                         persistentvolume/pvc-a66cb519-4ceb-4fbd-bdb5-d3e0270b8fa3                        googleapi: Error 400: The disk resource 'projects/k8s-jkns-e2e-gke-ubuntu-slow/zones/us-west1-b/disks/bootstrap-e2e-dynamic-pvc-a66cb519-4ceb-4fbd-bdb5-d3e0270b8fa3' is already being used by 'projects/k8s-jkns-e2e-gke-ubuntu-slow/zones/us-west1-b/instances/bootstrap-e2e-minion-group-5wcz', resourceInUseByAnotherResource\ndefault                              5m16s       Normal    VolumeDelete                         persistentvolume/pvc-de807cfb-4305-4809-b434-5bc500e4cde9                        googleapi: Error 400: The disk resource 'projects/k8s-jkns-e2e-gke-ubuntu-slow/zones/us-west1-b/disks/bootstrap-e2e-dynamic-pvc-de807cfb-4305-4809-b434-5bc500e4cde9' is already being used by 'projects/k8s-jkns-e2e-gke-ubuntu-slow/zones/us-west1-b/instances/bootstrap-e2e-minion-group-5wcz', resourceInUseByAnotherResource\ndefault                
              14m         Normal    Scheduled                            pod/recycler-for-nfs-5nx7n                                                       Successfully assigned default/recycler-for-nfs-5nx7n to bootstrap-e2e-minion-group-mnwl\ndefault                              14m         Normal    Pulling                              pod/recycler-for-nfs-5nx7n                                                       Pulling image \"k8s.gcr.io/busybox:1.27\"\ndefault                              14m         Normal    Pulled                               pod/recycler-for-nfs-5nx7n                                                       Successfully pulled image \"k8s.gcr.io/busybox:1.27\"\ndefault                              14m         Normal    Created                              pod/recycler-for-nfs-5nx7n                                                       Created container pv-recycler\ndefault                              14m         Normal    Started                              pod/recycler-for-nfs-5nx7n                                                       Started container pv-recycler\ndefault                              14m         Normal    Scheduled                            pod/recycler-for-nfs-5nx7n                                                       Successfully assigned default/recycler-for-nfs-5nx7n to bootstrap-e2e-minion-group-mnwl\ndefault                              14m         Normal    Pulled                               pod/recycler-for-nfs-5nx7n                                                       Container image \"k8s.gcr.io/busybox:1.27\" already present on machine\ndefault                              14m         Normal    Created                              pod/recycler-for-nfs-5nx7n                                                       Created container pv-recycler\ndefault                              14m         Normal    Started                              pod/recycler-for-nfs-5nx7n                                                       Started container pv-recycler\ndefault                              9m14s       Normal    DeadlineExceeded                     pod/recycler-for-nfs-5nx7n                                                       Pod was active on the node longer than the specified deadline\ndefault                              9m14s       Normal    Killing                              pod/recycler-for-nfs-5nx7n                                                       Stopping container pv-recycler\ndeployment-2085                      3m22s       Normal    Scheduled                            pod/test-cleanup-controller-gjkvp                                                Successfully assigned deployment-2085/test-cleanup-controller-gjkvp to bootstrap-e2e-minion-group-5wcz\ndeployment-2085                      3m21s       Normal    Pulled                               pod/test-cleanup-controller-gjkvp                                                Container image \"docker.io/library/httpd:2.4.38-alpine\" already present on machine\ndeployment-2085                      3m20s       Normal    Created                              pod/test-cleanup-controller-gjkvp                                                Created container httpd\ndeployment-2085                      3m19s       Normal    Started                              pod/test-cleanup-controller-gjkvp                                                Started container httpd\ndeployment-2085                      3m10s       Normal    Killing                              
pod/test-cleanup-controller-gjkvp                                                Stopping container httpd\ndeployment-2085                      3m22s       Normal    SuccessfulCreate                     replicaset/test-cleanup-controller                                               Created pod: test-cleanup-controller-gjkvp\ndeployment-2085                      3m10s       Normal    SuccessfulDelete                     replicaset/test-cleanup-controller                                               Deleted pod: test-cleanup-controller-gjkvp\ndeployment-2085                      3m14s       Normal    Scheduled                            pod/test-cleanup-deployment-55ffc6b7b6-9t6sx                                     Successfully assigned deployment-2085/test-cleanup-deployment-55ffc6b7b6-9t6sx to bootstrap-e2e-minion-group-mnwl\ndeployment-2085                      3m13s       Normal    Pulled                               pod/test-cleanup-deployment-55ffc6b7b6-9t6sx                                     Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\ndeployment-2085                      3m13s       Normal    Created                              pod/test-cleanup-deployment-55ffc6b7b6-9t6sx                                     Created container agnhost\ndeployment-2085                      3m12s       Normal    Started                              pod/test-cleanup-deployment-55ffc6b7b6-9t6sx                                     Started container agnhost\ndeployment-2085                      3m15s       Normal    SuccessfulCreate                     replicaset/test-cleanup-deployment-55ffc6b7b6                                    Created pod: test-cleanup-deployment-55ffc6b7b6-9t6sx\ndeployment-2085                      3m15s       Normal    ScalingReplicaSet                    deployment/test-cleanup-deployment                                               Scaled up replica set test-cleanup-deployment-55ffc6b7b6 to 1\ndeployment-2085                      3m10s       Normal    ScalingReplicaSet                    deployment/test-cleanup-deployment                                               Scaled down replica set test-cleanup-controller to 0\ndeployment-3447                      93s         Normal    Scheduled                            pod/test-rollover-controller-6pwr8                                               Successfully assigned deployment-3447/test-rollover-controller-6pwr8 to bootstrap-e2e-minion-group-mnwl\ndeployment-3447                      92s         Normal    Pulled                               pod/test-rollover-controller-6pwr8                                               Container image \"docker.io/library/httpd:2.4.38-alpine\" already present on machine\ndeployment-3447                      91s         Normal    Created                              pod/test-rollover-controller-6pwr8                                               Created container httpd\ndeployment-3447                      90s         Normal    Started                              pod/test-rollover-controller-6pwr8                                               Started container httpd\ndeployment-3447                      51s         Normal    Killing                              pod/test-rollover-controller-6pwr8                                               Stopping container httpd\ndeployment-3447                      93s         Normal    SuccessfulCreate                     replicaset/test-rollover-controller                                              
Created pod: test-rollover-controller-6pwr8\ndeployment-3447                      51s         Normal    SuccessfulDelete                     replicaset/test-rollover-controller                                              Deleted pod: test-rollover-controller-6pwr8\ndeployment-3447                      80s         Normal    Scheduled                            pod/test-rollover-deployment-574d6dfbff-bn2zt                                    Successfully assigned deployment-3447/test-rollover-deployment-574d6dfbff-bn2zt to bootstrap-e2e-minion-group-n0jl\ndeployment-3447                      72s         Normal    Pulled                               pod/test-rollover-deployment-574d6dfbff-bn2zt                                    Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\ndeployment-3447                      72s         Normal    Created                              pod/test-rollover-deployment-574d6dfbff-bn2zt                                    Created container agnhost\ndeployment-3447                      71s         Normal    Started                              pod/test-rollover-deployment-574d6dfbff-bn2zt                                    Started container agnhost\ndeployment-3447                      80s         Normal    SuccessfulCreate                     replicaset/test-rollover-deployment-574d6dfbff                                   Created pod: test-rollover-deployment-574d6dfbff-bn2zt\ndeployment-3447                      84s         Normal    Scheduled                            pod/test-rollover-deployment-f6c94f66c-j8qr6                                     Successfully assigned deployment-3447/test-rollover-deployment-f6c94f66c-j8qr6 to bootstrap-e2e-minion-group-9dh8\ndeployment-3447                      77s         Warning   FailedCreatePodSandBox               pod/test-rollover-deployment-f6c94f66c-j8qr6                                     Failed to create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod \"test-rollover-deployment-f6c94f66c-j8qr6\": Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused \"process_linux.go:430: container init caused \\\"process_linux.go:419: writing syncT 'resume' caused \\\\\\\"write init-p: broken pipe\\\\\\\"\\\"\": unknown\ndeployment-3447                      84s         Normal    SuccessfulCreate                     replicaset/test-rollover-deployment-f6c94f66c                                    Created pod: test-rollover-deployment-f6c94f66c-j8qr6\ndeployment-3447                      80s         Normal    SuccessfulDelete                     replicaset/test-rollover-deployment-f6c94f66c                                    Deleted pod: test-rollover-deployment-f6c94f66c-j8qr6\ndeployment-3447                      84s         Normal    ScalingReplicaSet                    deployment/test-rollover-deployment                                              Scaled up replica set test-rollover-deployment-f6c94f66c to 1\ndeployment-3447                      80s         Normal    ScalingReplicaSet                    deployment/test-rollover-deployment                                              Scaled down replica set test-rollover-deployment-f6c94f66c to 0\ndeployment-3447                      80s         Normal    ScalingReplicaSet                    deployment/test-rollover-deployment                                              Scaled up replica set test-rollover-deployment-574d6dfbff to 
1\ndeployment-3447                      51s         Normal    ScalingReplicaSet                    deployment/test-rollover-deployment                                              Scaled down replica set test-rollover-controller to 0\ndeployment-7737                      101s        Normal    Scheduled                            pod/webserver-595b5b9587-5zzc2                                                   Successfully assigned deployment-7737/webserver-595b5b9587-5zzc2 to bootstrap-e2e-minion-group-n0jl\ndeployment-7737                      97s         Normal    Pulled                               pod/webserver-595b5b9587-5zzc2                                                   Container image \"docker.io/library/httpd:2.4.38-alpine\" already present on machine\ndeployment-7737                      96s         Normal    Created                              pod/webserver-595b5b9587-5zzc2                                                   Created container httpd\ndeployment-7737                      96s         Normal    Started                              pod/webserver-595b5b9587-5zzc2                                                   Started container httpd\ndeployment-7737                      82s         Normal    Killing                              pod/webserver-595b5b9587-5zzc2                                                   Stopping container httpd\ndeployment-7737                      88s         Normal    Scheduled                            pod/webserver-595b5b9587-bc5w9                                                   Successfully assigned deployment-7737/webserver-595b5b9587-bc5w9 to bootstrap-e2e-minion-group-9dh8\ndeployment-7737                      84s         Normal    Pulled                               pod/webserver-595b5b9587-bc5w9                                                   Container image \"docker.io/library/httpd:2.4.38-alpine\" already present on machine\ndeployment-7737                      83s         Normal    Created                              pod/webserver-595b5b9587-bc5w9                                                   Created container httpd\ndeployment-7737                      80s         Warning   Failed                               pod/webserver-595b5b9587-bc5w9                                                   Error: failed to start container \"httpd\": Error response from daemon: OCI runtime start failed: cannot start an already running container: unknown\ndeployment-7737                      81s         Normal    Scheduled                            pod/webserver-595b5b9587-bvzxl                                                   Successfully assigned deployment-7737/webserver-595b5b9587-bvzxl to bootstrap-e2e-minion-group-9dh8\ndeployment-7737                      74s         Normal    Pulled                               pod/webserver-595b5b9587-bvzxl                                                   Container image \"docker.io/library/httpd:2.4.38-alpine\" already present on machine\ndeployment-7737                      74s         Warning   Failed                               pod/webserver-595b5b9587-bvzxl                                                   Error: cannot find volume \"default-token-mpc9q\" to mount into container \"httpd\"\ndeployment-7737                      100s        Normal    Scheduled                            pod/webserver-595b5b9587-h6qtq                                                   Successfully assigned deployment-7737/webserver-595b5b9587-h6qtq to bootstrap-e2e-minion-group-9dh8\ndeployment-7737      
96s   Normal   Pulled   pod/webserver-595b5b9587-h6qtq   Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
deployment-7737   96s   Normal   Created   pod/webserver-595b5b9587-h6qtq   Created container httpd
deployment-7737   94s   Normal   Started   pod/webserver-595b5b9587-h6qtq   Started container httpd
deployment-7737   89s   Normal   Killing   pod/webserver-595b5b9587-h6qtq   Stopping container httpd
deployment-7737   82s   Normal   Scheduled   pod/webserver-595b5b9587-hsc6t   Successfully assigned deployment-7737/webserver-595b5b9587-hsc6t to bootstrap-e2e-minion-group-n0jl
deployment-7737   73s   Normal   Pulled   pod/webserver-595b5b9587-hsc6t   Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
deployment-7737   73s   Normal   Created   pod/webserver-595b5b9587-hsc6t   Created container httpd
deployment-7737   71s   Normal   Started   pod/webserver-595b5b9587-hsc6t   Started container httpd
deployment-7737   100s   Normal   Scheduled   pod/webserver-595b5b9587-jftqr   Successfully assigned deployment-7737/webserver-595b5b9587-jftqr to bootstrap-e2e-minion-group-n0jl
deployment-7737   95s   Normal   Pulled   pod/webserver-595b5b9587-jftqr   Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
deployment-7737   95s   Normal   Created   pod/webserver-595b5b9587-jftqr   Created container httpd
deployment-7737   93s   Normal   Started   pod/webserver-595b5b9587-jftqr   Started container httpd
deployment-7737   90s   Normal   Killing   pod/webserver-595b5b9587-jftqr   Stopping container httpd
deployment-7737   100s   Normal   Scheduled   pod/webserver-595b5b9587-jfxfs   Successfully assigned deployment-7737/webserver-595b5b9587-jfxfs to bootstrap-e2e-minion-group-n0jl
deployment-7737   97s   Normal   Pulled   pod/webserver-595b5b9587-jfxfs   Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
deployment-7737   97s   Normal   Created   pod/webserver-595b5b9587-jfxfs   Created container httpd
deployment-7737   96s   Normal   Started   pod/webserver-595b5b9587-jfxfs   Started container httpd
deployment-7737   82s   Normal   Killing   pod/webserver-595b5b9587-jfxfs   Stopping container httpd
deployment-7737   100s   Normal   Scheduled   pod/webserver-595b5b9587-m2626   Successfully assigned deployment-7737/webserver-595b5b9587-m2626 to bootstrap-e2e-minion-group-n0jl
deployment-7737   96s   Normal   Pulled   pod/webserver-595b5b9587-m2626   Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
deployment-7737   96s   Normal   Created   pod/webserver-595b5b9587-m2626   Created container httpd
deployment-7737   94s   Normal   Started   pod/webserver-595b5b9587-m2626   Started container httpd
deployment-7737   82s   Normal   Killing   pod/webserver-595b5b9587-m2626   Stopping container httpd
deployment-7737   82s   Normal   Scheduled   pod/webserver-595b5b9587-tz5jv   Successfully assigned deployment-7737/webserver-595b5b9587-tz5jv to bootstrap-e2e-minion-group-mnwl
deployment-7737   81s   Warning   FailedCreatePodSandBox   pod/webserver-595b5b9587-tz5jv   Failed to create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod "webserver-595b5b9587-tz5jv": Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \"process_linux.go:413: running prestart hook 0 caused \\\"error running hook: exit status 1, stdout: , stderr: time=\\\\\\\"2020-01-16T07:04:59Z\\\\\\\" level=fatal msg=\\\\\\\"no such file or directory\\\\\\\"\\\\n\\\"\"": unknown
deployment-7737   100s   Normal   Scheduled   pod/webserver-595b5b9587-v9qgj   Successfully assigned deployment-7737/webserver-595b5b9587-v9qgj to bootstrap-e2e-minion-group-9dh8
deployment-7737   96s   Normal   Pulled   pod/webserver-595b5b9587-v9qgj   Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
deployment-7737   96s   Normal   Created   pod/webserver-595b5b9587-v9qgj   Created container httpd
deployment-7737   93s   Normal   Started   pod/webserver-595b5b9587-v9qgj   Started container httpd
deployment-7737   89s   Normal   Killing   pod/webserver-595b5b9587-v9qgj   Stopping container httpd
deployment-7737   88s   Normal   Scheduled   pod/webserver-595b5b9587-zw2h7   Successfully assigned deployment-7737/webserver-595b5b9587-zw2h7 to bootstrap-e2e-minion-group-9dh8
deployment-7737   84s   Normal   Pulled   pod/webserver-595b5b9587-zw2h7   Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
deployment-7737   84s   Normal   Created   pod/webserver-595b5b9587-zw2h7   Created container httpd
deployment-7737   83s   Warning   Failed   pod/webserver-595b5b9587-zw2h7   Error: failed to start container "httpd": Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \"rootfs_linux.go:58: mounting \\\"/var/lib/kubelet/pods/6bb52985-aa1a-43c3-8bae-2e889b1a8b84/volumes/kubernetes.io~secret/default-token-mpc9q\\\" to rootfs \\\"/var/lib/docker/overlay2/9e1a69f7b52c2bf9e26ea597e4858d32bb28851f56b3a8c619fb0d4a86b5c9ab/merged\\\" at \\\"/var/run/secrets/kubernetes.io/serviceaccount\\\" caused \\\"stat /var/lib/kubelet/pods/6bb52985-aa1a-43c3-8bae-2e889b1a8b84/volumes/kubernetes.io~secret/default-token-mpc9q: no such file or directory\\\"\"": unknown
deployment-7737   101s   Normal   SuccessfulCreate   replicaset/webserver-595b5b9587   Created pod: webserver-595b5b9587-5zzc2
deployment-7737   101s   Normal   SuccessfulCreate   replicaset/webserver-595b5b9587   Created pod: webserver-595b5b9587-h6qtq
deployment-7737   100s   Normal   SuccessfulCreate   replicaset/webserver-595b5b9587   Created pod: webserver-595b5b9587-jfxfs
deployment-7737   100s   Normal   SuccessfulCreate   replicaset/webserver-595b5b9587   Created pod: webserver-595b5b9587-v9qgj
deployment-7737   100s   Normal   SuccessfulCreate   replicaset/webserver-595b5b9587   Created pod: webserver-595b5b9587-m2626
deployment-7737   100s   Normal   SuccessfulCreate   replicaset/webserver-595b5b9587   Created pod: webserver-595b5b9587-jftqr
deployment-7737   90s   Normal   SuccessfulDelete   replicaset/webserver-595b5b9587   Deleted pod: webserver-595b5b9587-jftqr
deployment-7737   89s   Normal   SuccessfulCreate   replicaset/webserver-595b5b9587   Created pod: webserver-595b5b9587-zw2h7
deployment-7737   88s   Normal   SuccessfulCreate   replicaset/webserver-595b5b9587   Created pod: webserver-595b5b9587-bc5w9
deployment-7737   85s   Normal   SuccessfulDelete   replicaset/webserver-595b5b9587   Deleted pod: webserver-595b5b9587-zw2h7
deployment-7737   83s   Normal   SuccessfulDelete   replicaset/webserver-595b5b9587   Deleted pod: webserver-595b5b9587-bc5w9
deployment-7737   82s   Normal   SuccessfulCreate   replicaset/webserver-595b5b9587   Created pod: webserver-595b5b9587-tz5jv
deployment-7737   82s   Normal   SuccessfulDelete   replicaset/webserver-595b5b9587   Deleted pod: webserver-595b5b9587-tz5jv
deployment-7737   81s   Normal   SuccessfulCreate   replicaset/webserver-595b5b9587   (combined from similar events): Created pod: webserver-595b5b9587-bvzxl
deployment-7737   75s   Normal   SuccessfulDelete   replicaset/webserver-595b5b9587   Deleted pod: webserver-595b5b9587-bvzxl
deployment-7737   75s   Normal   SuccessfulDelete   replicaset/webserver-595b5b9587   Deleted pod: webserver-595b5b9587-hsc6t
deployment-7737   82s   Normal   Scheduled   pod/webserver-6f4df6d875-9lfl5   Successfully assigned deployment-7737/webserver-6f4df6d875-9lfl5 to bootstrap-e2e-minion-group-mnwl
deployment-7737   79s   Normal   Pulled   pod/webserver-6f4df6d875-9lfl5   Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
deployment-7737   79s   Normal   Created   pod/webserver-6f4df6d875-9lfl5   Created container httpd
deployment-7737   78s   Normal   Started   pod/webserver-6f4df6d875-9lfl5   Started container httpd
deployment-7737   50s   Normal   Killing   pod/webserver-6f4df6d875-9lfl5   Stopping container httpd
deployment-7737   68s   Normal   Scheduled   pod/webserver-6f4df6d875-dmz5n   Successfully assigned deployment-7737/webserver-6f4df6d875-dmz5n to bootstrap-e2e-minion-group-5wcz
deployment-7737   63s   Normal   Pulled   pod/webserver-6f4df6d875-dmz5n   Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
deployment-7737   63s   Normal   Created   pod/webserver-6f4df6d875-dmz5n   Created container httpd
deployment-7737   62s   Normal   Started   pod/webserver-6f4df6d875-dmz5n   Started container httpd
deployment-7737   60s   Normal   Killing   pod/webserver-6f4df6d875-dmz5n   Stopping container httpd
deployment-7737   82s   Normal   Scheduled   pod/webserver-6f4df6d875-dt67w   Successfully assigned deployment-7737/webserver-6f4df6d875-dt67w to bootstrap-e2e-minion-group-mnwl
deployment-7737   80s   Normal   Pulled   pod/webserver-6f4df6d875-dt67w   Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
deployment-7737   80s   Normal   Created   pod/webserver-6f4df6d875-dt67w   Created container httpd
deployment-7737   78s   Normal   Started   pod/webserver-6f4df6d875-dt67w   Started container httpd
deployment-7737   74s   Normal   Killing   pod/webserver-6f4df6d875-dt67w   Stopping container httpd
deployment-7737   82s   Normal   Scheduled   pod/webserver-6f4df6d875-fp696   Successfully assigned deployment-7737/webserver-6f4df6d875-fp696 to bootstrap-e2e-minion-group-mnwl
deployment-7737   79s   Normal   Pulled   pod/webserver-6f4df6d875-fp696   Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
deployment-7737   78s   Normal   Created   pod/webserver-6f4df6d875-fp696   Created container httpd
deployment-7737   78s   Normal   Started   pod/webserver-6f4df6d875-fp696   Started container httpd
deployment-7737   68s   Normal   Killing   pod/webserver-6f4df6d875-fp696   Stopping container httpd
deployment-7737   70s   Normal   Scheduled   pod/webserver-6f4df6d875-hljlx   Successfully assigned deployment-7737/webserver-6f4df6d875-hljlx to bootstrap-e2e-minion-group-mnwl
deployment-7737   65s   Warning   FailedCreatePodSandBox   pod/webserver-6f4df6d875-hljlx   Failed to create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod "webserver-6f4df6d875-hljlx": Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \"process_linux.go:419: writing syncT 'resume' caused \\\"write init-p: broken pipe\\\"\"": unknown
deployment-7737   70s   Normal   Scheduled   pod/webserver-6f4df6d875-kgfvl   Successfully assigned deployment-7737/webserver-6f4df6d875-kgfvl to bootstrap-e2e-minion-group-n0jl
deployment-7737   65s   Normal   Pulled   pod/webserver-6f4df6d875-kgfvl   Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
deployment-7737   64s   Normal   Created   pod/webserver-6f4df6d875-kgfvl   Created container httpd
deployment-7737   64s   Normal   Started   pod/webserver-6f4df6d875-kgfvl   Started container httpd
deployment-7737   34s   Normal   Killing   pod/webserver-6f4df6d875-kgfvl   Stopping container httpd
deployment-7737   32s   Normal   Scheduled   pod/webserver-6f4df6d875-kpr6t   Successfully assigned deployment-7737/webserver-6f4df6d875-kpr6t to bootstrap-e2e-minion-group-n0jl
deployment-7737   31s   Normal   Pulled   pod/webserver-6f4df6d875-kpr6t   Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
deployment-7737   31s   Normal   Created   pod/webserver-6f4df6d875-kpr6t   Created container httpd
deployment-7737   31s   Normal   Started   pod/webserver-6f4df6d875-kpr6t   Started container httpd
deployment-7737   17s   Normal   Killing   pod/webserver-6f4df6d875-kpr6t   Stopping container httpd
deployment-7737   50s   Normal   Scheduled   pod/webserver-6f4df6d875-pbjds   Successfully assigned deployment-7737/webserver-6f4df6d875-pbjds to bootstrap-e2e-minion-group-mnwl
deployment-7737   49s   Normal   Pulled   pod/webserver-6f4df6d875-pbjds   Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
deployment-7737   49s   Normal   Created   pod/webserver-6f4df6d875-pbjds   Created container httpd
deployment-7737   48s   Normal   Started   pod/webserver-6f4df6d875-pbjds   Started container httpd
deployment-7737   42s   Normal   Killing   pod/webserver-6f4df6d875-pbjds   Stopping container httpd
deployment-7737   70s   Normal   Scheduled   pod/webserver-6f4df6d875-vb7cj   Successfully assigned deployment-7737/webserver-6f4df6d875-vb7cj to bootstrap-e2e-minion-group-5wcz
deployment-7737   64s   Normal   Pulled   pod/webserver-6f4df6d875-vb7cj   Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
deployment-7737   64s   Normal   Created   pod/webserver-6f4df6d875-vb7cj   Created container httpd
deployment-7737   63s   Normal   Started   pod/webserver-6f4df6d875-vb7cj   Started container httpd
deployment-7737   57s   Normal   Killing   pod/webserver-6f4df6d875-vb7cj   Stopping container httpd
deployment-7737   82s   Normal   SuccessfulCreate   replicaset/webserver-6f4df6d875   Created pod: webserver-6f4df6d875-dt67w
deployment-7737   82s   Normal   SuccessfulCreate   replicaset/webserver-6f4df6d875   Created pod: webserver-6f4df6d875-9lfl5
deployment-7737   82s   Normal   SuccessfulCreate   replicaset/webserver-6f4df6d875   Created pod: webserver-6f4df6d875-fp696
deployment-7737   74s   Normal   SuccessfulDelete   replicaset/webserver-6f4df6d875   Deleted pod: webserver-6f4df6d875-dt67w
deployment-7737   70s   Normal   SuccessfulCreate   replicaset/webserver-6f4df6d875   Created pod: webserver-6f4df6d875-vb7cj
deployment-7737   70s   Normal   SuccessfulCreate   replicaset/webserver-6f4df6d875   Created pod: webserver-6f4df6d875-hljlx
deployment-7737   70s   Normal   SuccessfulCreate   replicaset/webserver-6f4df6d875   Created pod: webserver-6f4df6d875-kgfvl
deployment-7737   68s   Normal   SuccessfulDelete   replicaset/webserver-6f4df6d875   Deleted pod: webserver-6f4df6d875-hljlx
deployment-7737   68s   Normal   SuccessfulCreate   replicaset/webserver-6f4df6d875   Created pod: webserver-6f4df6d875-dmz5n
deployment-7737   60s   Normal   SuccessfulDelete   replicaset/webserver-6f4df6d875   Deleted pod: webserver-6f4df6d875-dmz5n
deployment-7737   57s   Normal   SuccessfulDelete   replicaset/webserver-6f4df6d875   Deleted pod: webserver-6f4df6d875-vb7cj
deployment-7737   51s   Normal   SuccessfulCreate   replicaset/webserver-6f4df6d875   Created pod: webserver-6f4df6d875-pbjds
deployment-7737   42s   Normal   SuccessfulDelete   replicaset/webserver-6f4df6d875   Deleted pod: webserver-6f4df6d875-pbjds
deployment-7737   33s   Normal   SuccessfulCreate   replicaset/webserver-6f4df6d875   Created pod: webserver-6f4df6d875-kpr6t
deployment-7737   17s   Normal   SuccessfulDelete   replicaset/webserver-6f4df6d875   Deleted pod: webserver-6f4df6d875-kpr6t
deployment-7737   91s   Normal   Scheduled   pod/webserver-79fbcb94c6-4gl99   Successfully assigned deployment-7737/webserver-79fbcb94c6-4gl99 to bootstrap-e2e-minion-group-n0jl
deployment-7737   90s   Normal   Pulled   pod/webserver-79fbcb94c6-4gl99   Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
deployment-7737   90s   Normal   Created   pod/webserver-79fbcb94c6-4gl99   Created container httpd
deployment-7737   87s   Normal   Started   pod/webserver-79fbcb94c6-4gl99   Started container httpd
deployment-7737   67s   Normal   Scheduled   pod/webserver-79fbcb94c6-6zz2w   Successfully assigned deployment-7737/webserver-79fbcb94c6-6zz2w to bootstrap-e2e-minion-group-9dh8
deployment-7737   61s   Normal   Pulled   pod/webserver-79fbcb94c6-6zz2w   Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
deployment-7737   61s   Normal   Created   pod/webserver-79fbcb94c6-6zz2w   Created container httpd
deployment-7737   59s   Normal   Started   pod/webserver-79fbcb94c6-6zz2w   Started container httpd
deployment-7737   43s   Normal   Killing   pod/webserver-79fbcb94c6-6zz2w   Stopping container httpd
deployment-7737   75s   Normal   Scheduled   pod/webserver-79fbcb94c6-7dqpf   Successfully assigned deployment-7737/webserver-79fbcb94c6-7dqpf to bootstrap-e2e-minion-group-n0jl
deployment-7737   70s   Normal   Pulled   pod/webserver-79fbcb94c6-7dqpf   Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
deployment-7737   69s   Normal   Created   pod/webserver-79fbcb94c6-7dqpf   Created container httpd
deployment-7737   67s   Warning   Failed   pod/webserver-79fbcb94c6-7dqpf   Error: failed to start container "httpd": Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:303: getting the final child's pid from pipe caused \"EOF\"": unknown
deployment-7737   74s   Normal   Scheduled   pod/webserver-79fbcb94c6-9c2kh   Successfully assigned deployment-7737/webserver-79fbcb94c6-9c2kh to bootstrap-e2e-minion-group-mnwl
deployment-7737   72s   Normal   Pulled   pod/webserver-79fbcb94c6-9c2kh   Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
deployment-7737   72s   Normal   Created   pod/webserver-79fbcb94c6-9c2kh   Created container httpd
deployment-7737   72s   Normal   Started   pod/webserver-79fbcb94c6-9c2kh   Started container httpd
deployment-7737   70s   Normal   Killing   pod/webserver-79fbcb94c6-9c2kh   Stopping container httpd
deployment-7737   84s   Normal   Scheduled   pod/webserver-79fbcb94c6-dn5p2   Successfully assigned deployment-7737/webserver-79fbcb94c6-dn5p2 to bootstrap-e2e-minion-group-9dh8
deployment-7737   76s   Warning   FailedCreatePodSandBox   pod/webserver-79fbcb94c6-dn5p2   Failed to create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod "webserver-79fbcb94c6-dn5p2": Error response from daemon: cannot start a stopped process: unknown
deployment-7737   86s   Normal   Scheduled   pod/webserver-79fbcb94c6-fprhs   Successfully assigned deployment-7737/webserver-79fbcb94c6-fprhs to bootstrap-e2e-minion-group-n0jl
deployment-7737   80s   Warning   FailedCreatePodSandBox   pod/webserver-79fbcb94c6-fprhs   Failed to create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod "webserver-79fbcb94c6-fprhs": Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \"process_linux.go:413: running prestart hook 0 caused \\\"error running hook: exit status 1, stdout: , stderr: time=\\\\\\\"2020-01-16T07:05:00Z\\\\\\\" level=fatal msg=\\\\\\\"no such file or directory\\\\\\\"\\\\n\\\"\"": unknown
deployment-7737   91s   Normal   Scheduled   pod/webserver-79fbcb94c6-g9l9t   Successfully assigned deployment-7737/webserver-79fbcb94c6-g9l9t to bootstrap-e2e-minion-group-n0jl
deployment-7737   90s   Normal   Pulled   pod/webserver-79fbcb94c6-g9l9t   Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
deployment-7737   90s   Normal   Created   pod/webserver-79fbcb94c6-g9l9t   Created container httpd
deployment-7737   89s   Normal   Started   pod/webserver-79fbcb94c6-g9l9t   Started container httpd
deployment-7737   86s   Normal   Killing   pod/webserver-79fbcb94c6-g9l9t   Stopping container httpd
deployment-7737   90s   Normal   Scheduled   pod/webserver-79fbcb94c6-lnj5m   Successfully assigned deployment-7737/webserver-79fbcb94c6-lnj5m to bootstrap-e2e-minion-group-mnwl
deployment-7737   89s   Warning   FailedMount   pod/webserver-79fbcb94c6-lnj5m   MountVolume.SetUp failed for volume "default-token-mpc9q" : failed to sync secret cache: timed out waiting for the condition
deployment-7737   87s   Normal   Pulled   pod/webserver-79fbcb94c6-lnj5m   Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
deployment-7737   87s   Normal   Created   pod/webserver-79fbcb94c6-lnj5m   Created container httpd
deployment-7737   87s   Normal   Started   pod/webserver-79fbcb94c6-lnj5m   Started container httpd
deployment-7737   68s   Normal   Killing   pod/webserver-79fbcb94c6-lnj5m   Stopping container httpd
deployment-7737   88s   Normal   Scheduled   pod/webserver-79fbcb94c6-pkxh9   Successfully assigned deployment-7737/webserver-79fbcb94c6-pkxh9 to bootstrap-e2e-minion-group-9dh8
deployment-7737   84s   Normal   Pulled   pod/webserver-79fbcb94c6-pkxh9   Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
deployment-7737   84s   Normal   Created   pod/webserver-79fbcb94c6-pkxh9   Created container httpd
deployment-7737   81s   Normal   Started   pod/webserver-79fbcb94c6-pkxh9   Started container httpd
deployment-7737   58s   Normal   Killing   pod/webserver-79fbcb94c6-pkxh9   Stopping container httpd
deployment-7737   56s   Normal   Scheduled   pod/webserver-79fbcb94c6-s48w8   Successfully assigned deployment-7737/webserver-79fbcb94c6-s48w8 to bootstrap-e2e-minion-group-9dh8
deployment-7737   53s   Normal   Pulled   pod/webserver-79fbcb94c6-s48w8   Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
deployment-7737   53s   Normal   Created   pod/webserver-79fbcb94c6-s48w8   Created container httpd
deployment-7737   52s   Normal   Started   pod/webserver-79fbcb94c6-s48w8   Started container httpd
deployment-7737   50s   Normal   Killing   pod/webserver-79fbcb94c6-s48w8   Stopping container httpd
deployment-7737   50s   Normal   SandboxChanged   pod/webserver-79fbcb94c6-s48w8   Pod sandbox changed, it will be killed and re-created.
deployment-7737   48s   Warning   FailedCreatePodSandBox   pod/webserver-79fbcb94c6-s48w8   Failed to create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod "webserver-79fbcb94c6-s48w8": Error response from daemon: cannot start a stopped process: unknown
deployment-7737   75s   Normal   Scheduled   pod/webserver-79fbcb94c6-sx7v9   Successfully assigned deployment-7737/webserver-79fbcb94c6-sx7v9 to bootstrap-e2e-minion-group-9dh8
deployment-7737   68s   Warning   FailedCreatePodSandBox   pod/webserver-79fbcb94c6-sx7v9   Failed to create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod "webserver-79fbcb94c6-sx7v9": Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \"process_linux.go:413: running prestart hook 0 caused \\\"error running hook: exit status 1, stdout: , stderr: time=\\\\\\\"2020-01-16T07:05:13Z\\\\\\\" level=fatal msg=\\\\\\\"no such file or directory\\\\\\\"\\\\n\\\"\"": unknown
deployment-7737   61s   Normal   Scheduled   pod/webserver-79fbcb94c6-xdlmf   Successfully assigned deployment-7737/webserver-79fbcb94c6-xdlmf to bootstrap-e2e-minion-group-mnwl
deployment-7737   60s   Normal   Pulled   pod/webserver-79fbcb94c6-xdlmf   Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
deployment-7737   60s   Normal   Created   pod/webserver-79fbcb94c6-xdlmf   Created container httpd
deployment-7737   60s   Normal   Started   pod/webserver-79fbcb94c6-xdlmf   Started container httpd
deployment-7737   49s   Normal   Killing   pod/webserver-79fbcb94c6-xdlmf   Stopping container httpd
deployment-7737   93s   Normal   SuccessfulCreate   replicaset/webserver-79fbcb94c6   Created pod: webserver-79fbcb94c6-g9l9t
deployment-7737   92s   Normal   SuccessfulCreate   replicaset/webserver-79fbcb94c6   Created pod: webserver-79fbcb94c6-4gl99
deployment-7737   91s   Normal   SuccessfulCreate   replicaset/webserver-79fbcb94c6   Created pod: webserver-79fbcb94c6-lnj5m
deployment-7737   89s   Normal   SuccessfulCreate   replicaset/webserver-79fbcb94c6   Created pod: webserver-79fbcb94c6-pkxh9
deployment-7737   87s   Normal   SuccessfulCreate   replicaset/webserver-79fbcb94c6   Created pod: webserver-79fbcb94c6-fprhs
deployment-7737   85s   Normal   SuccessfulCreate   replicaset/webserver-79fbcb94c6   Created pod: webserver-79fbcb94c6-dn5p2
deployment-7737   83s   Normal   SuccessfulDelete   replicaset/webserver-79fbcb94c6   Deleted pod: webserver-79fbcb94c6-fprhs
deployment-7737   83s   Normal   SuccessfulDelete   replicaset/webserver-79fbcb94c6   Deleted pod: webserver-79fbcb94c6-dn5p2
deployment-7737   76s   Normal   SuccessfulCreate   replicaset/webserver-79fbcb94c6   Created pod: webserver-79fbcb94c6-7dqpf
deployment-7737   76s   Normal   SuccessfulCreate   replicaset/webserver-79fbcb94c6   Created pod: webserver-79fbcb94c6-sx7v9
deployment-7737   75s   Normal   SuccessfulCreate   replicaset/webserver-79fbcb94c6   Created pod: webserver-79fbcb94c6-9c2kh
deployment-7737   71s   Normal   SuccessfulDelete   replicaset/webserver-79fbcb94c6   Deleted pod: webserver-79fbcb94c6-sx7v9
deployment-7737   71s   Normal   SuccessfulDelete   replicaset/webserver-79fbcb94c6   Deleted pod: webserver-79fbcb94c6-7dqpf
deployment-7737   71s   Normal   SuccessfulDelete   replicaset/webserver-79fbcb94c6   Deleted pod: webserver-79fbcb94c6-9c2kh
deployment-7737   69s   Normal   SuccessfulDelete   replicaset/webserver-79fbcb94c6   Deleted pod: webserver-79fbcb94c6-lnj5m
deployment-7737   57s   Normal   SuccessfulCreate   replicaset/webserver-79fbcb94c6   (combined from similar events): Created pod: webserver-79fbcb94c6-s48w8
deployment-7737   50s   Normal   SuccessfulDelete   replicaset/webserver-79fbcb94c6   Deleted pod: webserver-79fbcb94c6-s48w8
deployment-7737   49s   Normal   SuccessfulDelete   replicaset/webserver-79fbcb94c6   Deleted pod: webserver-79fbcb94c6-xdlmf
deployment-7737   44s   Normal   SuccessfulDelete   replicaset/webserver-79fbcb94c6   Deleted pod: webserver-79fbcb94c6-6zz2w
deployment-7737   44s   Normal   Scheduled   pod/webserver-b44845bb-6csnf   Successfully assigned deployment-7737/webserver-b44845bb-6csnf to bootstrap-e2e-minion-group-9dh8
deployment-7737   37s   Normal   Pulled   pod/webserver-b44845bb-6csnf   Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
deployment-7737   36s   Normal   Created   pod/webserver-b44845bb-6csnf   Created container httpd
deployment-7737   34s   Normal   Started   pod/webserver-b44845bb-6csnf   Started container httpd
deployment-7737   34s   Normal   Killing   pod/webserver-b44845bb-6csnf   Stopping container httpd
deployment-7737   33s   Normal   Scheduled   pod/webserver-b44845bb-76fxt   Successfully assigned deployment-7737/webserver-b44845bb-76fxt to bootstrap-e2e-minion-group-mnwl
deployment-7737   28s   Normal   Pulled   pod/webserver-b44845bb-76fxt   Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
deployment-7737   27s   Normal   Created   pod/webserver-b44845bb-76fxt   Created container httpd
deployment-7737   29s   Normal   Started   pod/webserver-b44845bb-76fxt   Started container httpd
deployment-7737   27s   Warning   Failed   pod/webserver-b44845bb-76fxt   Error: failed to start container "httpd": Error response from daemon: OCI runtime create failed: container_linux.go:338: creating new parent process caused "container_linux.go:1897: running lstat on namespace path \"/proc/96805/ns/ipc\" caused \"lstat /proc/96805/ns/ipc: no such file or directory\"": unknown
deployment-7737   27s   Normal   Scheduled   pod/webserver-b44845bb-b969m   Successfully assigned deployment-7737/webserver-b44845bb-b969m to bootstrap-e2e-minion-group-9dh8
deployment-7737   23s   Normal   Pulled   pod/webserver-b44845bb-b969m   Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
deployment-7737   23s   Normal   Created   pod/webserver-b44845bb-b969m   Created container httpd
deployment-7737   23s   Normal   Started   pod/webserver-b44845bb-b969m   Started container httpd
deployment-7737   50s   Normal   Scheduled   pod/webserver-b44845bb-cnq6p   Successfully assigned deployment-7737/webserver-b44845bb-cnq6p to bootstrap-e2e-minion-group-n0jl
deployment-7737   48s   Normal   Pulled   pod/webserver-b44845bb-cnq6p   Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
deployment-7737   48s   Normal   Created   pod/webserver-b44845bb-cnq6p   Created container httpd
deployment-7737   48s   Normal   Started   pod/webserver-b44845bb-cnq6p   Started container httpd
deployment-7737   29s   Normal   Killing   pod/webserver-b44845bb-cnq6p   Stopping container httpd
deployment-7737   49s   Normal   Scheduled   pod/webserver-b44845bb-frljb   Successfully assigned deployment-7737/webserver-b44845bb-frljb to bootstrap-e2e-minion-group-mnwl
deployment-7737   48s   Normal   Pulled   pod/webserver-b44845bb-frljb   Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
deployment-7737   48s   Normal   Created   pod/webserver-b44845bb-frljb   Created container httpd
deployment-7737   47s   Normal   Started   pod/webserver-b44845bb-frljb   Started container httpd
deployment-7737   43s   Normal   Scheduled   pod/webserver-b44845bb-kgl8d   Successfully assigned deployment-7737/webserver-b44845bb-kgl8d to bootstrap-e2e-minion-group-n0jl
deployment-7737   41s   Normal   Pulled   pod/webserver-b44845bb-kgl8d   Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
deployment-7737   41s   Normal   Created   pod/webserver-b44845bb-kgl8d   Created container httpd
deployment-7737   41s   Normal   Started   pod/webserver-b44845bb-kgl8d   Started container httpd
deployment-7737   39s   Normal   Killing   pod/webserver-b44845bb-kgl8d   Stopping container httpd
deployment-7737   29s   Normal   Scheduled   pod/webserver-b44845bb-x5ngz   Successfully assigned deployment-7737/webserver-b44845bb-x5ngz to bootstrap-e2e-minion-group-9dh8
deployment-7737   28s   Warning   FailedMount   pod/webserver-b44845bb-x5ngz   MountVolume.SetUp failed for volume "default-token-mpc9q" : failed to sync secret cache: timed out waiting for the condition
deployment-7737   24s   Normal   Pulled   pod/webserver-b44845bb-x5ngz   Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
deployment-7737   24s   Normal   Created   pod/webserver-b44845bb-x5ngz   Created container httpd
deployment-7737   23s   Normal   Started   pod/webserver-b44845bb-x5ngz   Started container httpd
deployment-7737   50s   Normal   SuccessfulCreate   replicaset/webserver-b44845bb   Created pod: webserver-b44845bb-cnq6p
deployment-7737   50s   Normal   SuccessfulCreate   replicaset/webserver-b44845bb   Created pod: webserver-b44845bb-frljb
deployment-7737   44s   Normal   SuccessfulCreate   replicaset/webserver-b44845bb   Created pod: webserver-b44845bb-6csnf
deployment-7737   43s   Normal   SuccessfulCreate   replicaset/webserver-b44845bb   Created pod: webserver-b44845bb-kgl8d
deployment-7737   39s   Normal   SuccessfulDelete   replicaset/webserver-b44845bb   Deleted pod: webserver-b44845bb-kgl8d
deployment-7737   34s   Normal   SuccessfulCreate   replicaset/webserver-b44845bb   Created pod: webserver-b44845bb-76fxt
deployment-7737   29s   Normal   SuccessfulCreate   replicaset/webserver-b44845bb   Created pod: webserver-b44845bb-x5ngz
deployment-7737   28s   Normal   SuccessfulCreate   replicaset/webserver-b44845bb   Created pod: webserver-b44845bb-b969m
deployment-7737   102s   Normal   ScalingReplicaSet   deployment/webserver   Scaled up replica set webserver-595b5b9587 to 6
deployment-7737   96s   Warning   DeploymentRollbackRevisionNotFound   deployment/webserver   Unable to find last revision.
deployment-7737   62s   Normal   ScalingReplicaSet   deployment/webserver   Scaled up replica set webserver-79fbcb94c6 to 2
deployment-7737   91s   Normal   ScalingReplicaSet   deployment/webserver   Scaled down replica set webserver-595b5b9587 to 5
deployment-7737   91s   Normal   ScalingReplicaSet   deployment/webserver   Scaled up replica set webserver-79fbcb94c6 to 3
deployment-7737   87s   Normal   ScalingReplicaSet   deployment/webserver   Scaled down replica set webserver-595b5b9587 to 4
deployment-7737   76s   Normal   ScalingReplicaSet   deployment/webserver   Scaled up replica set webserver-79fbcb94c6 to 4
deployment-7737   84s   Normal   ScalingReplicaSet   deployment/webserver   Scaled down replica set webserver-595b5b9587 to 3
deployment-7737   83s   Normal   ScalingReplicaSet   deployment/webserver   Scaled down replica set webserver-595b5b9587 to 2
deployment-7737   83s   Normal   ScalingReplicaSet   deployment/webserver   Scaled down replica set webserver-79fbcb94c6 to 2
deployment-7737   62s   Normal   ScalingReplicaSet   deployment/webserver   (combined from similar events): Scaled down replica set webserver-6f4df6d875 to 3
deployment-7737   78s   Normal   DeploymentRollback   deployment/webserver   Rolled back deployment "webserver" to revision 2
deployment-7737   72s   Normal   DeploymentRollback   deployment/webserver   Rolled back deployment "webserver" to revision 3
deployment-7737   63s   Normal   DeploymentRollback   deployment/webserver   Rolled back deployment "webserver" to revision 4
disruption-4879   3m13s   Normal   Scheduled   pod/pod-0   Successfully assigned disruption-4879/pod-0 to bootstrap-e2e-minion-group-mnwl
disruption-4879   3m12s   Normal   Pulled   pod/pod-0   Container image "gcr.io/kubernetes-e2e-test-images/echoserver:2.2" already present on machine
disruption-4879   3m12s   Normal   Created   pod/pod-0   Created container busybox
disruption-4879   3m11s   Normal   Started   pod/pod-0   Started container busybox
disruption-4879   3m2s   Normal   Killing   pod/pod-0   Stopping container busybox
disruption-4879   3m13s   Normal   Scheduled   pod/pod-1   Successfully assigned disruption-4879/pod-1 to bootstrap-e2e-minion-group-9dh8
disruption-4879   3m10s   Normal   Pulled   pod/pod-1   Container image "gcr.io/kubernetes-e2e-test-images/echoserver:2.2" already present on machine
disruption-4879   3m9s   Normal   Created   pod/pod-1   Created container busybox
disruption-4879   3m9s   Normal   Started   pod/pod-1   Started container busybox
disruption-4879   3m13s   Normal   Scheduled   pod/pod-2   Successfully assigned disruption-4879/pod-2 to bootstrap-e2e-minion-group-9dh8
disruption-4879   3m9s   Normal   Pulled   pod/pod-2   Container image "gcr.io/kubernetes-e2e-test-images/echoserver:2.2" already present on machine
disruption-4879   3m9s   Normal   Created   pod/pod-2   Created container busybox
disruption-4879   3m8s   Normal   Started   pod/pod-2   Started container busybox
disruption-5978   42s   Normal   Scheduled   pod/pod-0   Successfully assigned disruption-5978/pod-0 to bootstrap-e2e-minion-group-9dh8
disruption-5978   38s   Normal   Pulled   pod/pod-0   Container image "gcr.io/kubernetes-e2e-test-images/echoserver:2.2" already present on machine
disruption-5978   38s   Normal   Created   pod/pod-0   Created container busybox
disruption-5978   35s   Normal   Started   pod/pod-0   Started container busybox
disruption-5978   42s   Normal   Scheduled   pod/pod-1   Successfully assigned disruption-5978/pod-1 to bootstrap-e2e-minion-group-9dh8
disruption-5978   37s   Normal   Pulled   pod/pod-1   Container image "gcr.io/kubernetes-e2e-test-images/echoserver:2.2" already present on machine
disruption-5978   36s   Normal   Created   pod/pod-1   Created container busybox
disruption-5978   35s   Normal   Started   pod/pod-1   Started container busybox
dns-1173   3m5s   Normal   Scheduled   pod/dns-test-8904f835-b33b-4bd6-80aa-e7a5120b1fc2   Successfully assigned dns-1173/dns-test-8904f835-b33b-4bd6-80aa-e7a5120b1fc2 to bootstrap-e2e-minion-group-9dh8
dns-1173   3m1s   Normal   Pulled   pod/dns-test-8904f835-b33b-4bd6-80aa-e7a5120b1fc2   Container image "gcr.io/kubernetes-e2e-test-images/test-webserver:1.0" already present on machine
dns-1173   3m1s   Normal   Created   pod/dns-test-8904f835-b33b-4bd6-80aa-e7a5120b1fc2   Created container webserver
dns-1173   3m   Normal   Started   pod/dns-test-8904f835-b33b-4bd6-80aa-e7a5120b1fc2   Started container webserver
dns-1173   3m   Normal   Pulled   pod/dns-test-8904f835-b33b-4bd6-80aa-e7a5120b1fc2   Container image "gcr.io/kubernetes-e2e-test-images/dnsutils:1.1" already present on machine
dns-1173   3m   Normal   Created   pod/dns-test-8904f835-b33b-4bd6-80aa-e7a5120b1fc2   Created container querier
dns-1173   2m59s   Normal   Started   pod/dns-test-8904f835-b33b-4bd6-80aa-e7a5120b1fc2   Started container querier
dns-1173   2m59s   Normal   Pulled   pod/dns-test-8904f835-b33b-4bd6-80aa-e7a5120b1fc2   Container image "gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0" already present on machine
dns-1173   2m58s   Normal   Created   pod/dns-test-8904f835-b33b-4bd6-80aa-e7a5120b1fc2   Created container jessie-querier
dns-1173   2m58s   Normal   Started   pod/dns-test-8904f835-b33b-4bd6-80aa-e7a5120b1fc2   Started container jessie-querier
dns-1173   2m11s   Normal   Killing   pod/dns-test-8904f835-b33b-4bd6-80aa-e7a5120b1fc2   Stopping container webserver
dns-1173   2m11s   Normal   Killing   pod/dns-test-8904f835-b33b-4bd6-80aa-e7a5120b1fc2   Stopping container jessie-querier
dns-1173   2m11s   Normal   Killing   pod/dns-test-8904f835-b33b-4bd6-80aa-e7a5120b1fc2   Stopping container querier
dns-3404   9s   Normal   Scheduled   pod/dns-test-09f3785c-b674-45b5-9816-ec29d61075b0   Successfully assigned dns-3404/dns-test-09f3785c-b674-45b5-9816-ec29d61075b0 to bootstrap-e2e-minion-group-9dh8
dns-6326   3m53s   Normal   Scheduled   pod/dns-test-be5231d9-e0ad-4840-8097-fd7a8383a0d1   Successfully assigned dns-6326/dns-test-be5231d9-e0ad-4840-8097-fd7a8383a0d1 to bootstrap-e2e-minion-group-mnwl
dns-6326   3m50s   Normal   Pulled   pod/dns-test-be5231d9-e0ad-4840-8097-fd7a8383a0d1   Container image "gcr.io/kubernetes-e2e-test-images/test-webserver:1.0" already present on machine
dns-6326   3m50s   Normal   Created   pod/dns-test-be5231d9-e0ad-4840-8097-fd7a8383a0d1   Created container webserver
dns-6326   3m49s   Normal   Started   pod/dns-test-be5231d9-e0ad-4840-8097-fd7a8383a0d1   Started container webserver
dns-6326   3m49s   Normal   Pulled   pod/dns-test-be5231d9-e0ad-4840-8097-fd7a8383a0d1   Container image "gcr.io/kubernetes-e2e-test-images/dnsutils:1.1" already present on machine
dns-6326   3m49s   Normal   Created   pod/dns-test-be5231d9-e0ad-4840-8097-fd7a8383a0d1   Created container querier
dns-6326   3m49s   Normal   Started   pod/dns-test-be5231d9-e0ad-4840-8097-fd7a8383a0d1   Started container querier
dns-6326   3m49s   Normal   Pulled   pod/dns-test-be5231d9-e0ad-4840-8097-fd7a8383a0d1   Container image "gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0" already present on machine
dns-6326   3m49s   Normal   Created   pod/dns-test-be5231d9-e0ad-4840-8097-fd7a8383a0d1   Created container jessie-querier
dns-6326   3m48s   Normal   Started   pod/dns-test-be5231d9-e0ad-4840-8097-fd7a8383a0d1   Started container jessie-querier
dns-6326   3m44s
      Normal    Killing                              pod/dns-test-be5231d9-e0ad-4840-8097-fd7a8383a0d1                                Stopping container webserver\ndns-6326                             3m44s       Normal    Killing                              pod/dns-test-be5231d9-e0ad-4840-8097-fd7a8383a0d1                                Stopping container jessie-querier\ndns-6326                             3m44s       Normal    Killing                              pod/dns-test-be5231d9-e0ad-4840-8097-fd7a8383a0d1                                Stopping container querier\ndns-6326                             4m41s       Normal    Scheduled                            pod/dns-test-c05097dc-59cf-4374-a9db-af3b2385a3a0                                Successfully assigned dns-6326/dns-test-c05097dc-59cf-4374-a9db-af3b2385a3a0 to bootstrap-e2e-minion-group-mnwl\ndns-6326                             4m40s       Warning   FailedMount                          pod/dns-test-c05097dc-59cf-4374-a9db-af3b2385a3a0                                MountVolume.SetUp failed for volume \"default-token-ncnqv\" : failed to sync secret cache: timed out waiting for the condition\ndns-6326                             4m36s       Normal    Pulled                               pod/dns-test-c05097dc-59cf-4374-a9db-af3b2385a3a0                                Container image \"gcr.io/kubernetes-e2e-test-images/test-webserver:1.0\" already present on machine\ndns-6326                             4m34s       Normal    Created                              pod/dns-test-c05097dc-59cf-4374-a9db-af3b2385a3a0                                Created container webserver\ndns-6326                             4m32s       Normal    Started                              pod/dns-test-c05097dc-59cf-4374-a9db-af3b2385a3a0                                Started container webserver\ndns-6326                             4m32s       Normal    Pulled                               pod/dns-test-c05097dc-59cf-4374-a9db-af3b2385a3a0                                Container image \"gcr.io/kubernetes-e2e-test-images/dnsutils:1.1\" already present on machine\ndns-6326                             4m32s       Normal    Created                              pod/dns-test-c05097dc-59cf-4374-a9db-af3b2385a3a0                                Created container querier\ndns-6326                             4m31s       Normal    Started                              pod/dns-test-c05097dc-59cf-4374-a9db-af3b2385a3a0                                Started container querier\ndns-6326                             4m31s       Normal    Pulled                               pod/dns-test-c05097dc-59cf-4374-a9db-af3b2385a3a0                                Container image \"gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0\" already present on machine\ndns-6326                             4m31s       Normal    Created                              pod/dns-test-c05097dc-59cf-4374-a9db-af3b2385a3a0                                Created container jessie-querier\ndns-6326                             4m30s       Normal    Started                              pod/dns-test-c05097dc-59cf-4374-a9db-af3b2385a3a0                                Started container jessie-querier\ndns-6326                             4m19s       Normal    Killing                              pod/dns-test-c05097dc-59cf-4374-a9db-af3b2385a3a0                                Stopping container webserver\ndns-6326                             4m19s       Normal    Killing                              
pod/dns-test-c05097dc-59cf-4374-a9db-af3b2385a3a0                                Stopping container jessie-querier\ndns-6326                             4m19s       Normal    Killing                              pod/dns-test-c05097dc-59cf-4374-a9db-af3b2385a3a0                                Stopping container querier\ndns-6326                             4m17s       Normal    Scheduled                            pod/dns-test-f9d44ed6-485f-4f01-9695-731d5a20cb03                                Successfully assigned dns-6326/dns-test-f9d44ed6-485f-4f01-9695-731d5a20cb03 to bootstrap-e2e-minion-group-mnwl\ndns-6326                             4m16s       Warning   FailedMount                          pod/dns-test-f9d44ed6-485f-4f01-9695-731d5a20cb03                                MountVolume.SetUp failed for volume \"default-token-ncnqv\" : failed to sync secret cache: timed out waiting for the condition\ndns-6326                             4m15s       Normal    Pulled                               pod/dns-test-f9d44ed6-485f-4f01-9695-731d5a20cb03                                Container image \"gcr.io/kubernetes-e2e-test-images/test-webserver:1.0\" already present on machine\ndns-6326                             4m14s       Normal    Created                              pod/dns-test-f9d44ed6-485f-4f01-9695-731d5a20cb03                                Created container webserver\ndns-6326                             4m14s       Normal    Started                              pod/dns-test-f9d44ed6-485f-4f01-9695-731d5a20cb03                                Started container webserver\ndns-6326                             4m14s       Normal    Pulled                               pod/dns-test-f9d44ed6-485f-4f01-9695-731d5a20cb03                                Container image \"gcr.io/kubernetes-e2e-test-images/dnsutils:1.1\" already present on machine\ndns-6326                             4m14s       Normal    Created                              pod/dns-test-f9d44ed6-485f-4f01-9695-731d5a20cb03                                Created container querier\ndns-6326                             4m14s       Normal    Started                              pod/dns-test-f9d44ed6-485f-4f01-9695-731d5a20cb03                                Started container querier\ndns-6326                             4m14s       Normal    Pulled                               pod/dns-test-f9d44ed6-485f-4f01-9695-731d5a20cb03                                Container image \"gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0\" already present on machine\ndns-6326                             4m14s       Normal    Created                              pod/dns-test-f9d44ed6-485f-4f01-9695-731d5a20cb03                                Created container jessie-querier\ndns-6326                             4m13s       Normal    Started                              pod/dns-test-f9d44ed6-485f-4f01-9695-731d5a20cb03                                Started container jessie-querier\ndns-6326                             3m55s       Normal    Killing                              pod/dns-test-f9d44ed6-485f-4f01-9695-731d5a20cb03                                Stopping container jessie-querier\ndns-6326                             3m55s       Normal    Killing                              pod/dns-test-f9d44ed6-485f-4f01-9695-731d5a20cb03                                Stopping container webserver\ndns-6326                             3m55s       Normal    Killing                              pod/dns-test-f9d44ed6-485f-4f01-9695-731d5a20cb03    
                            Stopping container querier\ndns-9982                             2m47s       Normal    Scheduled                            pod/dns-test-4c1e4e03-24ec-4789-8cd7-a57e4a05cd55                                Successfully assigned dns-9982/dns-test-4c1e4e03-24ec-4789-8cd7-a57e4a05cd55 to bootstrap-e2e-minion-group-mnwl\ndns-9982                             2m45s       Normal    Pulled                               pod/dns-test-4c1e4e03-24ec-4789-8cd7-a57e4a05cd55                                Container image \"gcr.io/kubernetes-e2e-test-images/test-webserver:1.0\" already present on machine\ndns-9982                             2m45s       Normal    Created                              pod/dns-test-4c1e4e03-24ec-4789-8cd7-a57e4a05cd55                                Created container webserver\ndns-9982                             2m44s       Normal    Started                              pod/dns-test-4c1e4e03-24ec-4789-8cd7-a57e4a05cd55                                Started container webserver\ndns-9982                             2m44s       Normal    Pulled                               pod/dns-test-4c1e4e03-24ec-4789-8cd7-a57e4a05cd55                                Container image \"gcr.io/kubernetes-e2e-test-images/dnsutils:1.1\" already present on machine\ndns-9982                             2m44s       Normal    Created                              pod/dns-test-4c1e4e03-24ec-4789-8cd7-a57e4a05cd55                                Created container querier\ndns-9982                             2m44s       Normal    Started                              pod/dns-test-4c1e4e03-24ec-4789-8cd7-a57e4a05cd55                                Started container querier\ndns-9982                             2m44s       Normal    Pulled                               pod/dns-test-4c1e4e03-24ec-4789-8cd7-a57e4a05cd55                                Container image \"gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0\" already present on machine\ndns-9982                             2m44s       Normal    Created                              pod/dns-test-4c1e4e03-24ec-4789-8cd7-a57e4a05cd55                                Created container jessie-querier\ndns-9982                             2m44s       Normal    Started                              pod/dns-test-4c1e4e03-24ec-4789-8cd7-a57e4a05cd55                                Started container jessie-querier\ndns-9982                             2m28s       Normal    Killing                              pod/dns-test-4c1e4e03-24ec-4789-8cd7-a57e4a05cd55                                Stopping container webserver\ndns-9982                             2m28s       Normal    Killing                              pod/dns-test-4c1e4e03-24ec-4789-8cd7-a57e4a05cd55                                Stopping container jessie-querier\ndns-9982                             2m28s       Normal    Killing                              pod/dns-test-4c1e4e03-24ec-4789-8cd7-a57e4a05cd55                                Stopping container querier\ndownward-api-2484                    22s         Normal    Scheduled                            pod/downward-api-73234432-e9fb-4b73-a86d-a9416551b84c                            Successfully assigned downward-api-2484/downward-api-73234432-e9fb-4b73-a86d-a9416551b84c to bootstrap-e2e-minion-group-5wcz\ndownward-api-2484                    22s         Normal    Pulled                               pod/downward-api-73234432-e9fb-4b73-a86d-a9416551b84c                            Container image 
\"docker.io/library/busybox:1.29\" already present on machine\ndownward-api-2484                    22s         Normal    Created                              pod/downward-api-73234432-e9fb-4b73-a86d-a9416551b84c                            Created container dapi-container\ndownward-api-2484                    21s         Normal    Started                              pod/downward-api-73234432-e9fb-4b73-a86d-a9416551b84c                            Started container dapi-container\ndownward-api-3367                    4m3s        Normal    Scheduled                            pod/downwardapi-volume-49cb85b0-4824-4adc-bfa8-54e8a9085fb0                      Successfully assigned downward-api-3367/downwardapi-volume-49cb85b0-4824-4adc-bfa8-54e8a9085fb0 to bootstrap-e2e-minion-group-9dh8\ndownward-api-3367                    4m          Normal    Pulled                               pod/downwardapi-volume-49cb85b0-4824-4adc-bfa8-54e8a9085fb0                      Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\ndownward-api-3367                    4m          Normal    Created                              pod/downwardapi-volume-49cb85b0-4824-4adc-bfa8-54e8a9085fb0                      Created container client-container\ndownward-api-3367                    3m57s       Normal    Started                              pod/downwardapi-volume-49cb85b0-4824-4adc-bfa8-54e8a9085fb0                      Started container client-container\ndownward-api-6746                    3m3s        Normal    Scheduled                            pod/downwardapi-volume-b2d6e3e9-7d66-4188-81a7-7931ca430bbf                      Successfully assigned downward-api-6746/downwardapi-volume-b2d6e3e9-7d66-4188-81a7-7931ca430bbf to bootstrap-e2e-minion-group-mnwl\ndownward-api-6746                    3m2s        Normal    Pulled                               pod/downwardapi-volume-b2d6e3e9-7d66-4188-81a7-7931ca430bbf                      Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\ndownward-api-6746                    3m2s        Normal    Created                              pod/downwardapi-volume-b2d6e3e9-7d66-4188-81a7-7931ca430bbf                      Created container client-container\ndownward-api-6746                    3m2s        Normal    Started                              pod/downwardapi-volume-b2d6e3e9-7d66-4188-81a7-7931ca430bbf                      Started container client-container\ndownward-api-6943                    3m34s       Normal    Scheduled                            pod/downward-api-6c60d950-3456-4cb0-af22-a3dd1d05bbea                            Successfully assigned downward-api-6943/downward-api-6c60d950-3456-4cb0-af22-a3dd1d05bbea to bootstrap-e2e-minion-group-9dh8\ndownward-api-6943                    3m28s       Normal    Pulled                               pod/downward-api-6c60d950-3456-4cb0-af22-a3dd1d05bbea                            Container image \"docker.io/library/busybox:1.29\" already present on machine\ndownward-api-6943                    3m27s       Normal    Created                              pod/downward-api-6c60d950-3456-4cb0-af22-a3dd1d05bbea                            Created container dapi-container\ndownward-api-6943                    3m25s       Normal    Started                              pod/downward-api-6c60d950-3456-4cb0-af22-a3dd1d05bbea                            Started container dapi-container\ne2e-kubelet-etc-hosts-768            16s         Normal    Scheduled     
                       pod/test-host-network-pod                                                        Successfully assigned e2e-kubelet-etc-hosts-768/test-host-network-pod to bootstrap-e2e-minion-group-n0jl\ne2e-kubelet-etc-hosts-768            16s         Normal    Pulled                               pod/test-host-network-pod                                                        Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\ne2e-kubelet-etc-hosts-768            15s         Normal    Created                              pod/test-host-network-pod                                                        Created container busybox-1\ne2e-kubelet-etc-hosts-768            14s         Normal    Started                              pod/test-host-network-pod                                                        Started container busybox-1\ne2e-kubelet-etc-hosts-768            14s         Normal    Pulled                               pod/test-host-network-pod                                                        Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\ne2e-kubelet-etc-hosts-768            14s         Normal    Created                              pod/test-host-network-pod                                                        Created container busybox-2\ne2e-kubelet-etc-hosts-768            14s         Normal    Started                              pod/test-host-network-pod                                                        Started container busybox-2\ne2e-kubelet-etc-hosts-768            26s         Normal    Scheduled                            pod/test-pod                                                                     Successfully assigned e2e-kubelet-etc-hosts-768/test-pod to bootstrap-e2e-minion-group-mnwl\ne2e-kubelet-etc-hosts-768            23s         Normal    Pulled                               pod/test-pod                                                                     Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\ne2e-kubelet-etc-hosts-768            23s         Normal    Created                              pod/test-pod                                                                     Created container busybox-1\ne2e-kubelet-etc-hosts-768            22s         Normal    Started                              pod/test-pod                                                                     Started container busybox-1\ne2e-kubelet-etc-hosts-768            22s         Normal    Pulled                               pod/test-pod                                                                     Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\ne2e-kubelet-etc-hosts-768            22s         Normal    Created                              pod/test-pod                                                                     Created container busybox-2\ne2e-kubelet-etc-hosts-768            22s         Normal    Started                              pod/test-pod                                                                     Started container busybox-2\ne2e-kubelet-etc-hosts-768            22s         Normal    Pulled                               pod/test-pod                                                                     Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\ne2e-kubelet-etc-hosts-768            21s         Normal    Created      
                        pod/test-pod                                                                     Created container busybox-3\ne2e-kubelet-etc-hosts-768            21s         Normal    Started                              pod/test-pod                                                                     Started container busybox-3\ne2e-privileged-pod-8651              3m1s        Normal    Scheduled                            pod/privileged-pod                                                               Successfully assigned e2e-privileged-pod-8651/privileged-pod to bootstrap-e2e-minion-group-5wcz\ne2e-privileged-pod-8651              3m          Normal    Pulled                               pod/privileged-pod                                                               Container image \"docker.io/library/busybox:1.29\" already present on machine\ne2e-privileged-pod-8651              3m          Normal    Created                              pod/privileged-pod                                                               Created container privileged-container\ne2e-privileged-pod-8651              3m          Normal    Started                              pod/privileged-pod                                                               Started container privileged-container\ne2e-privileged-pod-8651              3m          Normal    Pulled                               pod/privileged-pod                                                               Container image \"docker.io/library/busybox:1.29\" already present on machine\ne2e-privileged-pod-8651              3m          Normal    Created                              pod/privileged-pod                                                               Created container not-privileged-container\ne2e-privileged-pod-8651              2m59s       Normal    Started                              pod/privileged-pod                                                               Started container not-privileged-container\nemptydir-1365                        4m37s       Normal    Scheduled                            pod/pod-0f940d19-3fc6-4772-8877-00abfc87fc12                                     Successfully assigned emptydir-1365/pod-0f940d19-3fc6-4772-8877-00abfc87fc12 to bootstrap-e2e-minion-group-n0jl\nemptydir-1365                        4m25s       Normal    Pulled                               pod/pod-0f940d19-3fc6-4772-8877-00abfc87fc12                                     Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nemptydir-1365                        4m25s       Normal    Created                              pod/pod-0f940d19-3fc6-4772-8877-00abfc87fc12                                     Created container test-container\nemptydir-1365                        4m21s       Normal    Started                              pod/pod-0f940d19-3fc6-4772-8877-00abfc87fc12                                     Started container test-container\nemptydir-5078                        4m58s       Normal    Scheduled                            pod/pod-c3b7b496-2947-448f-b5db-73f742e23de9                                     Successfully assigned emptydir-5078/pod-c3b7b496-2947-448f-b5db-73f742e23de9 to bootstrap-e2e-minion-group-9dh8\nemptydir-5078                        4m54s       Normal    Pulled                               pod/pod-c3b7b496-2947-448f-b5db-73f742e23de9                                     Container image \"gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0\" already present on 
machine\nemptydir-5078                        4m54s       Normal    Created                              pod/pod-c3b7b496-2947-448f-b5db-73f742e23de9                                     Created container test-container\nemptydir-5078                        4m53s       Normal    Started                              pod/pod-c3b7b496-2947-448f-b5db-73f742e23de9                                     Started container test-container\nemptydir-7186                        3m6s        Normal    Scheduled                            pod/pod-45b4ea90-cca9-43ac-99ba-c6d632ba17ba                                     Successfully assigned emptydir-7186/pod-45b4ea90-cca9-43ac-99ba-c6d632ba17ba to bootstrap-e2e-minion-group-9dh8\nemptydir-7186                        3m3s        Normal    Pulled                               pod/pod-45b4ea90-cca9-43ac-99ba-c6d632ba17ba                                     Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nemptydir-7186                        3m2s        Normal    Created                              pod/pod-45b4ea90-cca9-43ac-99ba-c6d632ba17ba                                     Created container test-container\nemptydir-7186                        3m1s        Normal    Started                              pod/pod-45b4ea90-cca9-43ac-99ba-c6d632ba17ba                                     Started container test-container\nemptydir-920                         2m13s       Normal    Scheduled                            pod/pod-487a3642-e161-4daf-b38e-d9f8db148509                                     Successfully assigned emptydir-920/pod-487a3642-e161-4daf-b38e-d9f8db148509 to bootstrap-e2e-minion-group-n0jl\nemptydir-920                         2m10s       Normal    Pulled                               pod/pod-487a3642-e161-4daf-b38e-d9f8db148509                                     Container image \"gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0\" already present on machine\nemptydir-920                         2m9s        Normal    Created                              pod/pod-487a3642-e161-4daf-b38e-d9f8db148509                                     Created container test-container\nemptydir-920                         2m9s        Normal    Started                              pod/pod-487a3642-e161-4daf-b38e-d9f8db148509                                     Started container test-container\nflexvolume-7308                      109s        Normal    SuccessfulAttachVolume               pod/flex-client                                                                  AttachVolume.Attach succeeded for volume \"flex-volume-0\"\nflexvolume-7308                      100s        Normal    Pulled                               pod/flex-client                                                                  Container image \"docker.io/library/busybox:1.29\" already present on machine\nflexvolume-7308                      100s        Normal    Created                              pod/flex-client                                                                  Created container flex-client\nflexvolume-7308                      100s        Normal    Started                              pod/flex-client                                                                  Started container flex-client\nflexvolume-7308                      93s         Normal    Killing                              pod/flex-client                                                                  Stopping container flex-client\ngc-1557                         
     3m          Normal    Scheduled                            pod/simpletest.deployment-fb5f5c75d-dp2h2                                        Successfully assigned gc-1557/simpletest.deployment-fb5f5c75d-dp2h2 to bootstrap-e2e-minion-group-n0jl\ngc-1557                              2m59s       Warning   FailedMount                          pod/simpletest.deployment-fb5f5c75d-dp2h2                                        MountVolume.SetUp failed for volume \"default-token-jr74b\" : failed to sync secret cache: timed out waiting for the condition\ngc-1557                              2m57s       Normal    Pulled                               pod/simpletest.deployment-fb5f5c75d-dp2h2                                        Container image \"docker.io/library/nginx:1.14-alpine\" already present on machine\ngc-1557                              2m57s       Normal    Created                              pod/simpletest.deployment-fb5f5c75d-dp2h2                                        Created container nginx\ngc-1557                              2m57s       Normal    Started                              pod/simpletest.deployment-fb5f5c75d-dp2h2                                        Started container nginx\ngc-1557                              3m          Normal    Scheduled                            pod/simpletest.deployment-fb5f5c75d-wr4dn                                        Successfully assigned gc-1557/simpletest.deployment-fb5f5c75d-wr4dn to bootstrap-e2e-minion-group-mnwl\ngc-1557                              2m59s       Warning   FailedMount                          pod/simpletest.deployment-fb5f5c75d-wr4dn                                        MountVolume.SetUp failed for volume \"default-token-jr74b\" : failed to sync secret cache: timed out waiting for the condition\ngc-1557                              2m56s       Normal    Pulled                               pod/simpletest.deployment-fb5f5c75d-wr4dn                                        Container image \"docker.io/library/nginx:1.14-alpine\" already present on machine\ngc-1557                              2m56s       Normal    Created                              pod/simpletest.deployment-fb5f5c75d-wr4dn                                        Created container nginx\ngc-1557                              2m55s       Normal    Started                              pod/simpletest.deployment-fb5f5c75d-wr4dn                                        Started container nginx\ngc-1557                              3m          Normal    SuccessfulCreate                     replicaset/simpletest.deployment-fb5f5c75d                                       Created pod: simpletest.deployment-fb5f5c75d-dp2h2\ngc-1557                              3m          Normal    SuccessfulCreate                     replicaset/simpletest.deployment-fb5f5c75d                                       Created pod: simpletest.deployment-fb5f5c75d-wr4dn\ngc-1557                              3m1s        Normal    ScalingReplicaSet                    deployment/simpletest.deployment                                                 Scaled up replica set simpletest.deployment-fb5f5c75d to 2\ngc-2159                              72s         Normal    Scheduled                            pod/simpletest-rc-to-be-deleted-b4kf2                                            Successfully assigned gc-2159/simpletest-rc-to-be-deleted-b4kf2 to bootstrap-e2e-minion-group-mnwl\ngc-2159                              68s         Normal    Pulled                               
pod/simpletest-rc-to-be-deleted-b4kf2                                            Container image \"docker.io/library/nginx:1.14-alpine\" already present on machine\ngc-2159                              68s         Normal    Created                              pod/simpletest-rc-to-be-deleted-b4kf2                                            Created container nginx\ngc-2159                              65s         Normal    Started                              pod/simpletest-rc-to-be-deleted-b4kf2                                            Started container nginx\ngc-2159                              72s         Normal    Scheduled                            pod/simpletest-rc-to-be-deleted-fhc4f                                            Successfully assigned gc-2159/simpletest-rc-to-be-deleted-fhc4f to bootstrap-e2e-minion-group-5wcz\ngc-2159                              65s         Normal    Pulled                               pod/simpletest-rc-to-be-deleted-fhc4f                                            Container image \"docker.io/library/nginx:1.14-alpine\" already present on machine\ngc-2159                              64s         Normal    Created                              pod/simpletest-rc-to-be-deleted-fhc4f                                            Created container nginx\ngc-2159                              64s         Normal    Started                              pod/simpletest-rc-to-be-deleted-fhc4f                                            Started container nginx\ngc-2159                              72s         Normal    Scheduled                            pod/simpletest-rc-to-be-deleted-fkf6n                                            Successfully assigned gc-2159/simpletest-rc-to-be-deleted-fkf6n to bootstrap-e2e-minion-group-9dh8\ngc-2159                              63s         Normal    Pulled                               pod/simpletest-rc-to-be-deleted-fkf6n                                            Container image \"docker.io/library/nginx:1.14-alpine\" already present on machine\ngc-2159                              63s         Normal    Created                              pod/simpletest-rc-to-be-deleted-fkf6n                                            Created container nginx\ngc-2159                              61s         Normal    Started                              pod/simpletest-rc-to-be-deleted-fkf6n                                            Started container nginx\ngc-2159                              72s         Normal    Scheduled                            pod/simpletest-rc-to-be-deleted-fvsd9                                            Successfully assigned gc-2159/simpletest-rc-to-be-deleted-fvsd9 to bootstrap-e2e-minion-group-mnwl\ngc-2159                              68s         Normal    Pulled                               pod/simpletest-rc-to-be-deleted-fvsd9                                            Container image \"docker.io/library/nginx:1.14-alpine\" already present on machine\ngc-2159                              67s         Normal    Created                              pod/simpletest-rc-to-be-deleted-fvsd9                                            Created container nginx\ngc-2159                              65s         Normal    Started                              pod/simpletest-rc-to-be-deleted-fvsd9                                            Started container nginx\ngc-2159                              72s         Normal    Scheduled                            pod/simpletest-rc-to-be-deleted-gtg7x                           
                 Successfully assigned gc-2159/simpletest-rc-to-be-deleted-gtg7x to bootstrap-e2e-minion-group-n0jl\ngc-2159                              66s         Normal    Pulled                               pod/simpletest-rc-to-be-deleted-gtg7x                                            Container image \"docker.io/library/nginx:1.14-alpine\" already present on machine\ngc-2159                              66s         Normal    Created                              pod/simpletest-rc-to-be-deleted-gtg7x                                            Created container nginx\ngc-2159                              64s         Normal    Started                              pod/simpletest-rc-to-be-deleted-gtg7x                                            Started container nginx\ngc-2159                              72s         Normal    Scheduled                            pod/simpletest-rc-to-be-deleted-j5jnm                                            Successfully assigned gc-2159/simpletest-rc-to-be-deleted-j5jnm to bootstrap-e2e-minion-group-9dh8\ngc-2159                              64s         Normal    Pulled                               pod/simpletest-rc-to-be-deleted-j5jnm                                            Container image \"docker.io/library/nginx:1.14-alpine\" already present on machine\ngc-2159                              63s         Normal    Created                              pod/simpletest-rc-to-be-deleted-j5jnm                                            Created container nginx\ngc-2159                              61s         Normal    Started                              pod/simpletest-rc-to-be-deleted-j5jnm                                            Started container nginx\ngc-2159                              58s         Normal    Killing                              pod/simpletest-rc-to-be-deleted-j5jnm                                            Stopping container nginx\ngc-2159                              72s         Normal    Scheduled                            pod/simpletest-rc-to-be-deleted-kgbxj                                            Successfully assigned gc-2159/simpletest-rc-to-be-deleted-kgbxj to bootstrap-e2e-minion-group-9dh8\ngc-2159                              63s         Normal    Pulled                               pod/simpletest-rc-to-be-deleted-kgbxj                                            Container image \"docker.io/library/nginx:1.14-alpine\" already present on machine\ngc-2159                              62s         Normal    Created                              pod/simpletest-rc-to-be-deleted-kgbxj                                            Created container nginx\ngc-2159                              60s         Normal    Started                              pod/simpletest-rc-to-be-deleted-kgbxj                                            Started container nginx\ngc-2159                              59s         Normal    Killing                              pod/simpletest-rc-to-be-deleted-kgbxj                                            Stopping container nginx\ngc-2159                              72s         Normal    Scheduled                            pod/simpletest-rc-to-be-deleted-lfk5t                                            Successfully assigned gc-2159/simpletest-rc-to-be-deleted-lfk5t to bootstrap-e2e-minion-group-n0jl\ngc-2159                              66s         Normal    Pulled                               pod/simpletest-rc-to-be-deleted-lfk5t                                            Container image 
\"docker.io/library/nginx:1.14-alpine\" already present on machine\ngc-2159                              66s         Normal    Created                              pod/simpletest-rc-to-be-deleted-lfk5t                                            Created container nginx\ngc-2159                              64s         Normal    Started                              pod/simpletest-rc-to-be-deleted-lfk5t                                            Started container nginx\ngc-2159                              59s         Normal    Killing                              pod/simpletest-rc-to-be-deleted-lfk5t                                            Stopping container nginx\ngc-2159                              72s         Normal    Scheduled                            pod/simpletest-rc-to-be-deleted-nx8b2                                            Successfully assigned gc-2159/simpletest-rc-to-be-deleted-nx8b2 to bootstrap-e2e-minion-group-mnwl\ngc-2159                              66s         Normal    Pulled                               pod/simpletest-rc-to-be-deleted-nx8b2                                            Container image \"docker.io/library/nginx:1.14-alpine\" already present on machine\ngc-2159                              66s         Normal    Created                              pod/simpletest-rc-to-be-deleted-nx8b2                                            Created container nginx\ngc-2159                              64s         Normal    Started                              pod/simpletest-rc-to-be-deleted-nx8b2                                            Started container nginx\ngc-2159                              59s         Normal    Killing                              pod/simpletest-rc-to-be-deleted-nx8b2                                            Stopping container nginx\ngc-2159                              72s         Normal    Scheduled                            pod/simpletest-rc-to-be-deleted-xmdwl                                            Successfully assigned gc-2159/simpletest-rc-to-be-deleted-xmdwl to bootstrap-e2e-minion-group-9dh8\ngc-2159                              64s         Normal    Pulled                               pod/simpletest-rc-to-be-deleted-xmdwl                                            Container image \"docker.io/library/nginx:1.14-alpine\" already present on machine\ngc-2159                              64s         Normal    Created                              pod/simpletest-rc-to-be-deleted-xmdwl                                            Created container nginx\ngc-2159                              62s         Normal    Started                              pod/simpletest-rc-to-be-deleted-xmdwl                                            Started container nginx\ngc-2159                              58s         Normal    Killing                              pod/simpletest-rc-to-be-deleted-xmdwl                                            Stopping container nginx\ngc-2159                              73s         Normal    SuccessfulCreate                     replicationcontroller/simpletest-rc-to-be-deleted                                Created pod: simpletest-rc-to-be-deleted-gtg7x\ngc-2159                              72s         Normal    SuccessfulCreate                     replicationcontroller/simpletest-rc-to-be-deleted                                Created pod: simpletest-rc-to-be-deleted-fvsd9\ngc-2159                              72s         Normal    SuccessfulCreate                     
replicationcontroller/simpletest-rc-to-be-deleted                                Created pod: simpletest-rc-to-be-deleted-fkf6n\ngc-2159                              72s         Normal    SuccessfulCreate                     replicationcontroller/simpletest-rc-to-be-deleted                                Created pod: simpletest-rc-to-be-deleted-j5jnm\ngc-2159                              72s         Normal    SuccessfulCreate                     replicationcontroller/simpletest-rc-to-be-deleted                                Created pod: simpletest-rc-to-be-deleted-fhc4f\ngc-2159                              72s         Normal    SuccessfulCreate                     replicationcontroller/simpletest-rc-to-be-deleted                                Created pod: simpletest-rc-to-be-deleted-kgbxj\ngc-2159                              72s         Normal    SuccessfulCreate                     replicationcontroller/simpletest-rc-to-be-deleted                                Created pod: simpletest-rc-to-be-deleted-xmdwl\ngc-2159                              72s         Normal    SuccessfulCreate                     replicationcontroller/simpletest-rc-to-be-deleted                                Created pod: simpletest-rc-to-be-deleted-nx8b2\ngc-2159                              72s         Normal    SuccessfulCreate                     replicationcontroller/simpletest-rc-to-be-deleted                                Created pod: simpletest-rc-to-be-deleted-lfk5t\ngc-2159                              71s         Normal    SuccessfulCreate                     replicationcontroller/simpletest-rc-to-be-deleted                                (combined from similar events): Created pod: simpletest-rc-to-be-deleted-b4kf2\nhostpath-7207                        117s        Normal    Scheduled                            pod/pod-host-path-test                                                           Successfully assigned hostpath-7207/pod-host-path-test to bootstrap-e2e-minion-group-9dh8\nhostpath-7207                        115s        Normal    Pulled                               pod/pod-host-path-test                                                           Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nhostpath-7207                        115s        Normal    Created                              pod/pod-host-path-test                                                           Created container test-container-1\nhostpath-7207                        113s        Normal    Started                              pod/pod-host-path-test                                                           Started container test-container-1\nhostpath-7207                        113s        Normal    Pulled                               pod/pod-host-path-test                                                           Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nhostpath-7207                        113s        Normal    Created                              pod/pod-host-path-test                                                           Created container test-container-2\nhostpath-7207                        112s        Normal    Started                              pod/pod-host-path-test                                                           Started container test-container-2\nhostpath-9363                        4m38s       Normal    Scheduled                            pod/pod-host-path-test                                   
                        Successfully assigned hostpath-9363/pod-host-path-test to bootstrap-e2e-minion-group-9dh8\nhostpath-9363                        4m30s       Normal    Pulled                               pod/pod-host-path-test                                                           Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nhostpath-9363                        4m30s       Normal    Created                              pod/pod-host-path-test                                                           Created container test-container-1\nhostpath-9363                        4m27s       Normal    Started                              pod/pod-host-path-test                                                           Started container test-container-1\nhostpath-9363                        4m27s       Normal    Pulled                               pod/pod-host-path-test                                                           Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nhostpath-9363                        4m27s       Normal    Created                              pod/pod-host-path-test                                                           Created container test-container-2\nhostpath-9363                        4m25s       Normal    Started                              pod/pod-host-path-test                                                           Started container test-container-2\ninit-container-7210                  3m51s       Normal    Scheduled                            pod/pod-init-ced300d7-ca51-4c93-915d-ed22a61ca51c                                Successfully assigned init-container-7210/pod-init-ced300d7-ca51-4c93-915d-ed22a61ca51c to bootstrap-e2e-minion-group-mnwl\ninit-container-7210                  3m49s       Normal    Pulled                               pod/pod-init-ced300d7-ca51-4c93-915d-ed22a61ca51c                                Container image \"docker.io/library/busybox:1.29\" already present on machine\ninit-container-7210                  3m49s       Normal    Created                              pod/pod-init-ced300d7-ca51-4c93-915d-ed22a61ca51c                                Created container init1\ninit-container-7210                  3m49s       Normal    Started                              pod/pod-init-ced300d7-ca51-4c93-915d-ed22a61ca51c                                Started container init1\ninit-container-7210                  3m48s       Normal    Pulled                               pod/pod-init-ced300d7-ca51-4c93-915d-ed22a61ca51c                                Container image \"docker.io/library/busybox:1.29\" already present on machine\ninit-container-7210                  3m47s       Normal    Created                              pod/pod-init-ced300d7-ca51-4c93-915d-ed22a61ca51c                                Created container init2\ninit-container-7210                  3m47s       Normal    Started                              pod/pod-init-ced300d7-ca51-4c93-915d-ed22a61ca51c                                Started container init2\ninit-container-7210                  3m46s       Normal    Pulled                               pod/pod-init-ced300d7-ca51-4c93-915d-ed22a61ca51c                                Container image \"k8s.gcr.io/pause:3.1\" already present on machine\ninit-container-7210                  3m46s       Normal    Created                              pod/pod-init-ced300d7-ca51-4c93-915d-ed22a61ca51c                              
  Created container run1\ninit-container-7210                  3m46s       Normal    Started                              pod/pod-init-ced300d7-ca51-4c93-915d-ed22a61ca51c                                Started container run1\njob-8075                             3m12s       Normal    Scheduled                            pod/exceed-active-deadline-4cxlk                                                 Successfully assigned job-8075/exceed-active-deadline-4cxlk to bootstrap-e2e-minion-group-mnwl\njob-8075                             3m11s       Normal    Pulled                               pod/exceed-active-deadline-4cxlk                                                 Container image \"docker.io/library/busybox:1.29\" already present on machine\njob-8075                             3m11s       Normal    Created                              pod/exceed-active-deadline-4cxlk                                                 Created container c\njob-8075                             3m10s       Normal    Started                              pod/exceed-active-deadline-4cxlk                                                 Started container c\njob-8075                             3m8s        Normal    Killing                              pod/exceed-active-deadline-4cxlk                                                 Stopping container c\njob-8075                             3m13s       Normal    Scheduled                            pod/exceed-active-deadline-9jb7n                                                 Successfully assigned job-8075/exceed-active-deadline-9jb7n to bootstrap-e2e-minion-group-9dh8\njob-8075                             3m8s        Normal    Pulled                               pod/exceed-active-deadline-9jb7n                                                 Container image \"docker.io/library/busybox:1.29\" already present on machine\njob-8075                             3m8s        Normal    Created                              pod/exceed-active-deadline-9jb7n                                                 Created container c\njob-8075                             3m7s        Normal    Started                              pod/exceed-active-deadline-9jb7n                                                 Started container c\njob-8075                             3m13s       Normal    SuccessfulCreate                     job/exceed-active-deadline                                                       Created pod: exceed-active-deadline-9jb7n\njob-8075                             3m12s       Normal    SuccessfulCreate                     job/exceed-active-deadline                                                       Created pod: exceed-active-deadline-4cxlk\njob-8075                             3m8s        Normal    SuccessfulDelete                     job/exceed-active-deadline                                                       Deleted pod: exceed-active-deadline-4cxlk\njob-8075                             3m8s        Normal    SuccessfulDelete                     job/exceed-active-deadline                                                       Deleted pod: exceed-active-deadline-9jb7n\njob-8075                             3m8s        Warning   DeadlineExceeded                     job/exceed-active-deadline                                                       Job was active longer than specified deadline\nkube-system                          11m         Normal    Scheduled                            pod/coredns-65567c7b57-6nqz2                                                   
  Successfully assigned kube-system/coredns-65567c7b57-6nqz2 to bootstrap-e2e-minion-group-9dh8\nkube-system                          11m         Warning   FailedMount                          pod/coredns-65567c7b57-6nqz2                                                     MountVolume.SetUp failed for volume \"coredns-token-gfsp7\" : failed to sync secret cache: timed out waiting for the condition\nkube-system                          11m         Warning   FailedMount                          pod/coredns-65567c7b57-6nqz2                                                     MountVolume.SetUp failed for volume \"config-volume\" : failed to sync configmap cache: timed out waiting for the condition\nkube-system                          11m         Normal    Pulled                               pod/coredns-65567c7b57-6nqz2                                                     Container image \"k8s.gcr.io/coredns:1.6.5\" already present on machine\nkube-system                          11m         Normal    Created                              pod/coredns-65567c7b57-6nqz2                                                     Created container coredns\nkube-system                          11m         Normal    Started                              pod/coredns-65567c7b57-6nqz2                                                     Started container coredns\nkube-system                          11m         Normal    Killing                              pod/coredns-65567c7b57-6nqz2                                                     Stopping container coredns\nkube-system                          11m         Warning   Unhealthy                            pod/coredns-65567c7b57-6nqz2                                                     Readiness probe failed: Get http://10.64.0.79:8181/ready: dial tcp 10.64.0.79:8181: connect: connection refused\nkube-system                          11m         Normal    Scheduled                            pod/coredns-65567c7b57-ftsdc                                                     Successfully assigned kube-system/coredns-65567c7b57-ftsdc to bootstrap-e2e-minion-group-mnwl\nkube-system                          11m         Normal    Pulling                              pod/coredns-65567c7b57-ftsdc                                                     Pulling image \"k8s.gcr.io/coredns:1.6.5\"\nkube-system                          11m         Normal    Pulled                               pod/coredns-65567c7b57-ftsdc                                                     Successfully pulled image \"k8s.gcr.io/coredns:1.6.5\"\nkube-system                          11m         Normal    Created                              pod/coredns-65567c7b57-ftsdc                                                     Created container coredns\nkube-system                          11m         Normal    Started                              pod/coredns-65567c7b57-ftsdc                                                     Started container coredns\nkube-system                          11m         Normal    Killing                              pod/coredns-65567c7b57-ftsdc                                                     Stopping container coredns\nkube-system                          11m         Warning   Unhealthy                            pod/coredns-65567c7b57-ftsdc                                                     Readiness probe failed: Get http://10.64.2.82:8181/ready: dial tcp 10.64.2.82:8181: connect: connection refused\nkube-system                          11m         Normal    Scheduled 
pod/coredns-65567c7b57-kzcdl                        Successfully assigned kube-system/coredns-65567c7b57-kzcdl to bootstrap-e2e-minion-group-mnwl
kube-system  11m    Normal   Pulled                  pod/coredns-65567c7b57-kzcdl                    Container image "k8s.gcr.io/coredns:1.6.5" already present on machine
kube-system  11m    Normal   Created                 pod/coredns-65567c7b57-kzcdl                    Created container coredns
kube-system  11m    Normal   Started                 pod/coredns-65567c7b57-kzcdl                    Started container coredns
kube-system  11m    Normal   Killing                 pod/coredns-65567c7b57-kzcdl                    Stopping container coredns
kube-system  11m    Normal   Scheduled               pod/coredns-65567c7b57-l89mr                    Successfully assigned kube-system/coredns-65567c7b57-l89mr to bootstrap-e2e-minion-group-9dh8
kube-system  11m    Warning  FailedMount             pod/coredns-65567c7b57-l89mr                    MountVolume.SetUp failed for volume "config-volume" : failed to sync configmap cache: timed out waiting for the condition
kube-system  11m    Warning  FailedMount             pod/coredns-65567c7b57-l89mr                    MountVolume.SetUp failed for volume "coredns-token-gfsp7" : failed to sync secret cache: timed out waiting for the condition
kube-system  11m    Normal   Pulling                 pod/coredns-65567c7b57-l89mr                    Pulling image "k8s.gcr.io/coredns:1.6.5"
kube-system  11m    Normal   Pulled                  pod/coredns-65567c7b57-l89mr                    Successfully pulled image "k8s.gcr.io/coredns:1.6.5"
kube-system  11m    Normal   Created                 pod/coredns-65567c7b57-l89mr                    Created container coredns
kube-system  11m    Normal   Started                 pod/coredns-65567c7b57-l89mr                    Started container coredns
kube-system  11m    Normal   Killing                 pod/coredns-65567c7b57-l89mr                    Stopping container coredns
kube-system  18m    Normal   Scheduled               pod/coredns-65567c7b57-nhgsn                    Successfully assigned kube-system/coredns-65567c7b57-nhgsn to bootstrap-e2e-minion-group-5wcz
kube-system  18m    Normal   Pulling                 pod/coredns-65567c7b57-nhgsn                    Pulling image "k8s.gcr.io/coredns:1.6.5"
kube-system  18m    Normal   Pulled                  pod/coredns-65567c7b57-nhgsn                    Successfully pulled image "k8s.gcr.io/coredns:1.6.5"
kube-system  18m    Normal   Created                 pod/coredns-65567c7b57-nhgsn                    Created container coredns
kube-system  18m    Normal   Started                 pod/coredns-65567c7b57-nhgsn                    Started container coredns
kube-system  18m    Warning  FailedScheduling        pod/coredns-65567c7b57-vfjw5                    no nodes available to schedule pods
kube-system  18m    Warning  FailedScheduling        pod/coredns-65567c7b57-vfjw5                    0/4 nodes are available: 4 node(s) had taints that the pod didn't tolerate.
kube-system  18m    Warning  FailedScheduling        pod/coredns-65567c7b57-vfjw5                    0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) had taints that the pod didn't tolerate.
kube-system  18m    Normal   Scheduled               pod/coredns-65567c7b57-vfjw5                    Successfully assigned kube-system/coredns-65567c7b57-vfjw5 to bootstrap-e2e-minion-group-n0jl
kube-system  18m    Normal   Pulling                 pod/coredns-65567c7b57-vfjw5                    Pulling image "k8s.gcr.io/coredns:1.6.5"
kube-system  18m    Normal   Pulled                  pod/coredns-65567c7b57-vfjw5                    Successfully pulled image "k8s.gcr.io/coredns:1.6.5"
kube-system  18m    Normal   Created                 pod/coredns-65567c7b57-vfjw5                    Created container coredns
kube-system  18m    Normal   Started                 pod/coredns-65567c7b57-vfjw5                    Started container coredns
kube-system  19m    Warning  FailedCreate            replicaset/coredns-65567c7b57                   Error creating: pods "coredns-65567c7b57-" is forbidden: no providers available to validate pod request
kube-system  18m    Warning  FailedCreate            replicaset/coredns-65567c7b57                   Error creating: pods "coredns-65567c7b57-" is forbidden: unable to validate against any pod security policy: []
kube-system  18m    Normal   SuccessfulCreate        replicaset/coredns-65567c7b57                   Created pod: coredns-65567c7b57-vfjw5
kube-system  18m    Normal   SuccessfulCreate        replicaset/coredns-65567c7b57                   Created pod: coredns-65567c7b57-nhgsn
kube-system  11m    Normal   SuccessfulCreate        replicaset/coredns-65567c7b57                   Created pod: coredns-65567c7b57-l89mr
kube-system  11m    Normal   SuccessfulCreate        replicaset/coredns-65567c7b57                   Created pod: coredns-65567c7b57-ftsdc
kube-system  11m    Normal   SuccessfulDelete        replicaset/coredns-65567c7b57                   Deleted pod: coredns-65567c7b57-l89mr
kube-system  11m    Normal   SuccessfulDelete        replicaset/coredns-65567c7b57                   Deleted pod: coredns-65567c7b57-ftsdc
kube-system  11m    Normal   SuccessfulCreate        replicaset/coredns-65567c7b57                   Created pod: coredns-65567c7b57-6nqz2
kube-system  11m    Normal   SuccessfulCreate        replicaset/coredns-65567c7b57                   Created pod: coredns-65567c7b57-kzcdl
kube-system  11m    Normal   SuccessfulDelete        replicaset/coredns-65567c7b57                   Deleted pod: coredns-65567c7b57-6nqz2
kube-system  11m    Normal   SuccessfulDelete        replicaset/coredns-65567c7b57                   Deleted pod: coredns-65567c7b57-kzcdl
kube-system  19m    Normal   ScalingReplicaSet       deployment/coredns                              Scaled up replica set coredns-65567c7b57 to 1
kube-system  18m    Normal   ScalingReplicaSet       deployment/coredns                              Scaled up replica set coredns-65567c7b57 to 2
kube-system  11m    Normal   ScalingReplicaSet       deployment/coredns                              Scaled up replica set coredns-65567c7b57 to 4
kube-system  11m    Normal   ScalingReplicaSet       deployment/coredns                              Scaled down replica set coredns-65567c7b57 to 3
kube-system  11m    Normal   ScalingReplicaSet       deployment/coredns                              Scaled down replica set coredns-65567c7b57 to 2
kube-system  18m    Warning  FailedScheduling        pod/event-exporter-v0.3.1-747b47fcd-ml7vh       no nodes available to schedule pods
kube-system  18m    Warning  FailedScheduling        pod/event-exporter-v0.3.1-747b47fcd-ml7vh       0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) had taints that the pod didn't tolerate.
kube-system  18m    Normal   Scheduled               pod/event-exporter-v0.3.1-747b47fcd-ml7vh       Successfully assigned kube-system/event-exporter-v0.3.1-747b47fcd-ml7vh to bootstrap-e2e-minion-group-9dh8
kube-system  18m    Normal   TaintManagerEviction    pod/event-exporter-v0.3.1-747b47fcd-ml7vh       Cancelling deletion of Pod kube-system/event-exporter-v0.3.1-747b47fcd-ml7vh
kube-system  18m    Normal   Pulling                 pod/event-exporter-v0.3.1-747b47fcd-ml7vh       Pulling image "k8s.gcr.io/event-exporter:v0.3.1"
kube-system  18m    Normal   Pulled                  pod/event-exporter-v0.3.1-747b47fcd-ml7vh       Successfully pulled image "k8s.gcr.io/event-exporter:v0.3.1"
kube-system  18m    Normal   Created                 pod/event-exporter-v0.3.1-747b47fcd-ml7vh       Created container event-exporter
kube-system  18m    Normal   Started                 pod/event-exporter-v0.3.1-747b47fcd-ml7vh       Started container event-exporter
kube-system  18m    Normal   Pulling                 pod/event-exporter-v0.3.1-747b47fcd-ml7vh       Pulling image "k8s.gcr.io/prometheus-to-sd:v0.7.2"
kube-system  18m    Normal   Pulled                  pod/event-exporter-v0.3.1-747b47fcd-ml7vh       Successfully pulled image "k8s.gcr.io/prometheus-to-sd:v0.7.2"
kube-system  18m    Normal   Created                 pod/event-exporter-v0.3.1-747b47fcd-ml7vh       Created container prometheus-to-sd-exporter
kube-system  18m    Normal   Started                 pod/event-exporter-v0.3.1-747b47fcd-ml7vh       Started container prometheus-to-sd-exporter
kube-system  19m    Normal   SuccessfulCreate        replicaset/event-exporter-v0.3.1-747b47fcd      Created pod: event-exporter-v0.3.1-747b47fcd-ml7vh
kube-system  19m    Normal   ScalingReplicaSet       deployment/event-exporter-v0.3.1                Scaled up replica set event-exporter-v0.3.1-747b47fcd to 1
kube-system  18m    Warning  FailedScheduling        pod/fluentd-gcp-scaler-76d9c77b4d-zpv4t         no nodes available to schedule pods
kube-system  18m    Warning  FailedScheduling        pod/fluentd-gcp-scaler-76d9c77b4d-zpv4t         0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) had taints that the pod didn't tolerate.
kube-system  18m    Normal   Scheduled               pod/fluentd-gcp-scaler-76d9c77b4d-zpv4t         Successfully assigned kube-system/fluentd-gcp-scaler-76d9c77b4d-zpv4t to bootstrap-e2e-minion-group-mnwl
kube-system  18m    Normal   Pulling                 pod/fluentd-gcp-scaler-76d9c77b4d-zpv4t         Pulling image "k8s.gcr.io/fluentd-gcp-scaler:0.5.2"
kube-system  18m    Normal   Pulled                  pod/fluentd-gcp-scaler-76d9c77b4d-zpv4t         Successfully pulled image "k8s.gcr.io/fluentd-gcp-scaler:0.5.2"
kube-system  18m    Normal   Created                 pod/fluentd-gcp-scaler-76d9c77b4d-zpv4t         Created container fluentd-gcp-scaler
kube-system  18m    Normal   Started                 pod/fluentd-gcp-scaler-76d9c77b4d-zpv4t         Started container fluentd-gcp-scaler
kube-system  18m    Normal   SuccessfulCreate        replicaset/fluentd-gcp-scaler-76d9c77b4d        Created pod: fluentd-gcp-scaler-76d9c77b4d-zpv4t
kube-system  18m    Normal   ScalingReplicaSet       deployment/fluentd-gcp-scaler                   Scaled up replica set fluentd-gcp-scaler-76d9c77b4d to 1
kube-system  18m    Normal   Scheduled               pod/fluentd-gcp-v3.2.0-2wknn                    Successfully assigned kube-system/fluentd-gcp-v3.2.0-2wknn to bootstrap-e2e-minion-group-5wcz
kube-system  18m    Normal   Pulling                 pod/fluentd-gcp-v3.2.0-2wknn                    Pulling image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17"
kube-system  18m    Normal   Pulled                  pod/fluentd-gcp-v3.2.0-2wknn                    Successfully pulled image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17"
kube-system  18m    Normal   Created                 pod/fluentd-gcp-v3.2.0-2wknn                    Created container fluentd-gcp
kube-system  18m    Normal   Started                 pod/fluentd-gcp-v3.2.0-2wknn                    Started container fluentd-gcp
kube-system  18m    Normal   Pulled                  pod/fluentd-gcp-v3.2.0-2wknn                    Container image "k8s.gcr.io/prometheus-to-sd:v0.5.0" already present on machine
kube-system  18m    Normal   Created                 pod/fluentd-gcp-v3.2.0-2wknn                    Created container prometheus-to-sd-exporter
kube-system  18m    Normal   Started                 pod/fluentd-gcp-v3.2.0-2wknn                    Started container prometheus-to-sd-exporter
kube-system  17m    Normal   Killing                 pod/fluentd-gcp-v3.2.0-2wknn                    Stopping container fluentd-gcp
kube-system  17m    Normal   Killing                 pod/fluentd-gcp-v3.2.0-2wknn                    Stopping container prometheus-to-sd-exporter
kube-system  17m    Normal   Scheduled               pod/fluentd-gcp-v3.2.0-4qwt9                    Successfully assigned kube-system/fluentd-gcp-v3.2.0-4qwt9 to bootstrap-e2e-minion-group-n0jl
kube-system  17m    Normal   Pulled                  pod/fluentd-gcp-v3.2.0-4qwt9                    Container image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17" already present on machine
kube-system  17m    Normal   Created                 pod/fluentd-gcp-v3.2.0-4qwt9                    Created container fluentd-gcp
kube-system  17m    Normal   Started                 pod/fluentd-gcp-v3.2.0-4qwt9                    Started container fluentd-gcp
kube-system  17m    Normal   Pulled                  pod/fluentd-gcp-v3.2.0-4qwt9                    Container image "k8s.gcr.io/prometheus-to-sd:v0.5.0" already present on machine
kube-system  17m    Normal   Created                 pod/fluentd-gcp-v3.2.0-4qwt9                    Created container prometheus-to-sd-exporter
kube-system  17m    Normal   Started                 pod/fluentd-gcp-v3.2.0-4qwt9                    Started container prometheus-to-sd-exporter
kube-system  18m    Normal   Scheduled               pod/fluentd-gcp-v3.2.0-4stqh                    Successfully assigned kube-system/fluentd-gcp-v3.2.0-4stqh to bootstrap-e2e-master
kube-system  18m    Normal   Pulled                  pod/fluentd-gcp-v3.2.0-4stqh                    Container image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17" already present on machine
kube-system  18m    Normal   Created                 pod/fluentd-gcp-v3.2.0-4stqh                    Created container fluentd-gcp
kube-system  18m    Normal   Started                 pod/fluentd-gcp-v3.2.0-4stqh                    Started container fluentd-gcp
kube-system  18m    Normal   Pulled                  pod/fluentd-gcp-v3.2.0-4stqh                    Container image "k8s.gcr.io/prometheus-to-sd:v0.5.0" already present on machine
kube-system  18m    Normal   Created                 pod/fluentd-gcp-v3.2.0-4stqh                    Created container prometheus-to-sd-exporter
kube-system  18m    Normal   Started                 pod/fluentd-gcp-v3.2.0-4stqh                    Started container prometheus-to-sd-exporter
kube-system  18m    Normal   Scheduled               pod/fluentd-gcp-v3.2.0-8n6g2                    Successfully assigned kube-system/fluentd-gcp-v3.2.0-8n6g2 to bootstrap-e2e-minion-group-n0jl
kube-system  18m    Normal   Pulling                 pod/fluentd-gcp-v3.2.0-8n6g2                    Pulling image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17"
kube-system  18m    Normal   Pulled                  pod/fluentd-gcp-v3.2.0-8n6g2                    Successfully pulled image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17"
kube-system  18m    Normal   Created                 pod/fluentd-gcp-v3.2.0-8n6g2                    Created container fluentd-gcp
kube-system  18m    Normal   Started                 pod/fluentd-gcp-v3.2.0-8n6g2                    Started container fluentd-gcp
kube-system  18m    Normal   Pulled                  pod/fluentd-gcp-v3.2.0-8n6g2                    Container image "k8s.gcr.io/prometheus-to-sd:v0.5.0" already present on machine
kube-system  18m    Normal   Created                 pod/fluentd-gcp-v3.2.0-8n6g2                    Created container prometheus-to-sd-exporter
kube-system  18m    Normal   Started                 pod/fluentd-gcp-v3.2.0-8n6g2                    Started container prometheus-to-sd-exporter
kube-system  18m    Normal   Killing                 pod/fluentd-gcp-v3.2.0-8n6g2                    Stopping container prometheus-to-sd-exporter
kube-system  18m    Normal   Killing                 pod/fluentd-gcp-v3.2.0-8n6g2                    Stopping container fluentd-gcp
kube-system  18m    Normal   Scheduled               pod/fluentd-gcp-v3.2.0-b6xct                    Successfully assigned kube-system/fluentd-gcp-v3.2.0-b6xct to bootstrap-e2e-minion-group-mnwl
kube-system  18m    Warning  FailedMount             pod/fluentd-gcp-v3.2.0-b6xct                    MountVolume.SetUp failed for volume "config-volume" : failed to sync configmap cache: timed out waiting for the condition
kube-system  18m    Warning  FailedMount             pod/fluentd-gcp-v3.2.0-b6xct                    MountVolume.SetUp failed for volume "fluentd-gcp-token-vgcct" : failed to sync secret cache: timed out waiting for the condition
kube-system  18m    Normal   Pulling                 pod/fluentd-gcp-v3.2.0-b6xct                    Pulling image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17"
kube-system  18m    Normal   Pulled                  pod/fluentd-gcp-v3.2.0-b6xct                    Successfully pulled image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17"
kube-system  18m    Normal   Created                 pod/fluentd-gcp-v3.2.0-b6xct                    Created container fluentd-gcp
kube-system  18m    Normal   Started                 pod/fluentd-gcp-v3.2.0-b6xct                    Started container fluentd-gcp
kube-system  18m    Normal   Pulled                  pod/fluentd-gcp-v3.2.0-b6xct                    Container image "k8s.gcr.io/prometheus-to-sd:v0.5.0" already present on machine
kube-system  18m    Normal   Created                 pod/fluentd-gcp-v3.2.0-b6xct                    Created container prometheus-to-sd-exporter
kube-system  18m    Normal   Started                 pod/fluentd-gcp-v3.2.0-b6xct                    Started container prometheus-to-sd-exporter
kube-system  17m    Normal   Killing                 pod/fluentd-gcp-v3.2.0-b6xct                    Stopping container fluentd-gcp
kube-system  17m    Normal   Killing                 pod/fluentd-gcp-v3.2.0-b6xct                    Stopping container prometheus-to-sd-exporter
kube-system  17m    Normal   Scheduled               pod/fluentd-gcp-v3.2.0-chktk                    Successfully assigned kube-system/fluentd-gcp-v3.2.0-chktk to bootstrap-e2e-minion-group-9dh8
kube-system  17m    Normal   Pulled                  pod/fluentd-gcp-v3.2.0-chktk                    Container image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17" already present on machine
kube-system  17m    Normal   Created                 pod/fluentd-gcp-v3.2.0-chktk                    Created container fluentd-gcp
kube-system  17m    Normal   Started                 pod/fluentd-gcp-v3.2.0-chktk                    Started container fluentd-gcp
kube-system  17m    Normal   Pulled                  pod/fluentd-gcp-v3.2.0-chktk                    Container image "k8s.gcr.io/prometheus-to-sd:v0.5.0" already present on machine
kube-system  17m    Normal   Created                 pod/fluentd-gcp-v3.2.0-chktk                    Created container prometheus-to-sd-exporter
kube-system  17m    Normal   Started                 pod/fluentd-gcp-v3.2.0-chktk                    Started container prometheus-to-sd-exporter
kube-system  17m    Normal   Scheduled               pod/fluentd-gcp-v3.2.0-jbglh                    Successfully assigned kube-system/fluentd-gcp-v3.2.0-jbglh to bootstrap-e2e-minion-group-mnwl
kube-system  17m    Normal   Pulled                  pod/fluentd-gcp-v3.2.0-jbglh                    Container image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17" already present on machine
kube-system  17m    Normal   Created                 pod/fluentd-gcp-v3.2.0-jbglh                    Created container fluentd-gcp
kube-system  17m    Normal   Started                 pod/fluentd-gcp-v3.2.0-jbglh                    Started container fluentd-gcp
kube-system  17m    Normal   Pulled                  pod/fluentd-gcp-v3.2.0-jbglh                    Container image "k8s.gcr.io/prometheus-to-sd:v0.5.0" already present on machine
kube-system  17m    Normal   Created                 pod/fluentd-gcp-v3.2.0-jbglh                    Created container prometheus-to-sd-exporter
kube-system  17m    Normal   Started                 pod/fluentd-gcp-v3.2.0-jbglh                    Started container prometheus-to-sd-exporter
kube-system  18m    Normal   Scheduled               pod/fluentd-gcp-v3.2.0-jtgzv                    Successfully assigned kube-system/fluentd-gcp-v3.2.0-jtgzv to bootstrap-e2e-master
kube-system  18m    Normal   Pulling                 pod/fluentd-gcp-v3.2.0-jtgzv                    Pulling image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17"
kube-system  18m    Normal   Pulled                  pod/fluentd-gcp-v3.2.0-jtgzv                    Successfully pulled image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17"
kube-system  18m    Normal   Created                 pod/fluentd-gcp-v3.2.0-jtgzv                    Created container fluentd-gcp
kube-system  18m    Warning  Failed                  pod/fluentd-gcp-v3.2.0-jtgzv                    Error: failed to start container "fluentd-gcp": Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:303: getting the final child's pid from pipe caused \"read init-p: connection reset by peer\"": unknown
kube-system  18m    Normal   Pulled                  pod/fluentd-gcp-v3.2.0-jtgzv                    Container image "k8s.gcr.io/prometheus-to-sd:v0.5.0" already present on machine
kube-system  18m    Warning  Failed                  pod/fluentd-gcp-v3.2.0-jtgzv                    Error: cannot find volume "fluentd-gcp-token-vgcct" to mount into container "prometheus-to-sd-exporter"
kube-system  18m    Normal   Scheduled               pod/fluentd-gcp-v3.2.0-rkm2k                    Successfully assigned kube-system/fluentd-gcp-v3.2.0-rkm2k to bootstrap-e2e-minion-group-9dh8
kube-system  18m    Normal   Pulling                 pod/fluentd-gcp-v3.2.0-rkm2k                    Pulling image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17"
kube-system  18m    Normal   Pulled                  pod/fluentd-gcp-v3.2.0-rkm2k                    Successfully pulled image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17"
kube-system  18m    Normal   Created                 pod/fluentd-gcp-v3.2.0-rkm2k                    Created container fluentd-gcp
kube-system  18m    Normal   Started                 pod/fluentd-gcp-v3.2.0-rkm2k                    Started container fluentd-gcp
kube-system  18m    Normal   Pulled                  pod/fluentd-gcp-v3.2.0-rkm2k                    Container image "k8s.gcr.io/prometheus-to-sd:v0.5.0" already present on machine
kube-system  18m    Normal   Created                 pod/fluentd-gcp-v3.2.0-rkm2k                    Created container prometheus-to-sd-exporter
kube-system  18m    Normal   Started                 pod/fluentd-gcp-v3.2.0-rkm2k                    Started container prometheus-to-sd-exporter
kube-system  17m    Normal   Killing                 pod/fluentd-gcp-v3.2.0-rkm2k                    Stopping container fluentd-gcp
kube-system  17m    Normal   Killing                 pod/fluentd-gcp-v3.2.0-rkm2k                    Stopping container prometheus-to-sd-exporter
kube-system  17m    Normal   Scheduled               pod/fluentd-gcp-v3.2.0-vnzbs                    Successfully assigned kube-system/fluentd-gcp-v3.2.0-vnzbs to bootstrap-e2e-minion-group-5wcz
kube-system  17m    Normal   Pulled                  pod/fluentd-gcp-v3.2.0-vnzbs                    Container image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17" already present on machine
kube-system  17m    Normal   Created                 pod/fluentd-gcp-v3.2.0-vnzbs                    Created container fluentd-gcp
kube-system  17m    Normal   Started                 pod/fluentd-gcp-v3.2.0-vnzbs                    Started container fluentd-gcp
kube-system  17m    Normal   Pulled                  pod/fluentd-gcp-v3.2.0-vnzbs                    Container image "k8s.gcr.io/prometheus-to-sd:v0.5.0" already present on machine
kube-system  17m    Normal   Created                 pod/fluentd-gcp-v3.2.0-vnzbs                    Created container prometheus-to-sd-exporter
kube-system  17m    Normal   Started                 pod/fluentd-gcp-v3.2.0-vnzbs                    Started container prometheus-to-sd-exporter
kube-system  18m    Normal   SuccessfulCreate        daemonset/fluentd-gcp-v3.2.0                    Created pod: fluentd-gcp-v3.2.0-rkm2k
kube-system  18m    Normal   SuccessfulCreate        daemonset/fluentd-gcp-v3.2.0                    Created pod: fluentd-gcp-v3.2.0-2wknn
kube-system  18m    Normal   SuccessfulCreate        daemonset/fluentd-gcp-v3.2.0                    Created pod: fluentd-gcp-v3.2.0-b6xct
kube-system  18m    Normal   SuccessfulCreate        daemonset/fluentd-gcp-v3.2.0                    Created pod: fluentd-gcp-v3.2.0-8n6g2
kube-system  18m    Normal   SuccessfulCreate        daemonset/fluentd-gcp-v3.2.0                    Created pod: fluentd-gcp-v3.2.0-jtgzv
kube-system  18m    Normal   SuccessfulDelete        daemonset/fluentd-gcp-v3.2.0                    Deleted pod: fluentd-gcp-v3.2.0-jtgzv
kube-system  18m    Normal   SuccessfulCreate        daemonset/fluentd-gcp-v3.2.0                    Created pod: fluentd-gcp-v3.2.0-4stqh
kube-system  18m    Normal   SuccessfulDelete        daemonset/fluentd-gcp-v3.2.0                    Deleted pod: fluentd-gcp-v3.2.0-8n6g2
kube-system  17m    Normal   SuccessfulCreate        daemonset/fluentd-gcp-v3.2.0                    Created pod: fluentd-gcp-v3.2.0-4qwt9
kube-system  17m    Normal   SuccessfulDelete        daemonset/fluentd-gcp-v3.2.0                    Deleted pod: fluentd-gcp-v3.2.0-2wknn
kube-system  17m    Normal   SuccessfulCreate        daemonset/fluentd-gcp-v3.2.0                    Created pod: fluentd-gcp-v3.2.0-vnzbs
kube-system  17m    Normal   SuccessfulDelete        daemonset/fluentd-gcp-v3.2.0                    Deleted pod: fluentd-gcp-v3.2.0-b6xct
kube-system  17m    Normal   SuccessfulCreate        daemonset/fluentd-gcp-v3.2.0                    Created pod: fluentd-gcp-v3.2.0-jbglh
kube-system  17m    Normal   SuccessfulDelete        daemonset/fluentd-gcp-v3.2.0                    Deleted pod: fluentd-gcp-v3.2.0-rkm2k
kube-system  17m    Normal   SuccessfulCreate        daemonset/fluentd-gcp-v3.2.0                    (combined from similar events): Created pod: fluentd-gcp-v3.2.0-chktk
kube-system  18m    Normal   LeaderElection          configmap/ingress-gce-lock                      bootstrap-e2e-master_84de9 became leader
kube-system  6m17s  Warning  Unhealthy               pod/kube-apiserver-bootstrap-e2e-master         Readiness probe failed: HTTP probe failed with statuscode: 500
kube-system  19m    Normal   LeaderElection          endpoints/kube-controller-manager               bootstrap-e2e-master_201dbe5e-0555-40a4-b9eb-a5b923f91b6d became leader
kube-system  19m    Normal   LeaderElection          lease/kube-controller-manager                   bootstrap-e2e-master_201dbe5e-0555-40a4-b9eb-a5b923f91b6d became leader
kube-system  18m    Warning  FailedScheduling        pod/kube-dns-autoscaler-65bc6d4889-8j96k        no nodes available to schedule pods
kube-system  18m    Warning  FailedScheduling        pod/kube-dns-autoscaler-65bc6d4889-8j96k        0/4 nodes are available: 4 node(s) had taints that the pod didn't tolerate.
kube-system  18m    Warning  FailedScheduling        pod/kube-dns-autoscaler-65bc6d4889-8j96k        0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) had taints that the pod didn't tolerate.
kube-system  18m    Normal   Scheduled               pod/kube-dns-autoscaler-65bc6d4889-8j96k        Successfully assigned kube-system/kube-dns-autoscaler-65bc6d4889-8j96k to bootstrap-e2e-minion-group-5wcz
kube-system  18m    Normal   Pulling                 pod/kube-dns-autoscaler-65bc6d4889-8j96k        Pulling image "k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.7.1"
kube-system  18m    Normal   Pulled                  pod/kube-dns-autoscaler-65bc6d4889-8j96k        Successfully pulled image "k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.7.1"
kube-system  18m    Normal   Created                 pod/kube-dns-autoscaler-65bc6d4889-8j96k        Created container autoscaler
kube-system  18m    Normal   Started                 pod/kube-dns-autoscaler-65bc6d4889-8j96k        Started container autoscaler
kube-system  11m    Normal   Killing                 pod/kube-dns-autoscaler-65bc6d4889-8j96k        Stopping container autoscaler
kube-system  11m    Normal   Scheduled               pod/kube-dns-autoscaler-65bc6d4889-mzf7g        Successfully assigned kube-system/kube-dns-autoscaler-65bc6d4889-mzf7g to bootstrap-e2e-minion-group-mnwl
kube-system  11m    Normal   Pulling                 pod/kube-dns-autoscaler-65bc6d4889-mzf7g        Pulling image "k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.7.1"
kube-system  11m    Normal   Pulled                  pod/kube-dns-autoscaler-65bc6d4889-mzf7g        Successfully pulled image "k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.7.1"
kube-system  11m    Normal   Created                 pod/kube-dns-autoscaler-65bc6d4889-mzf7g        Created container autoscaler
kube-system  11m    Normal   Started                 pod/kube-dns-autoscaler-65bc6d4889-mzf7g        Started container autoscaler
kube-system  18m    Warning  FailedCreate            replicaset/kube-dns-autoscaler-65bc6d4889       Error creating: pods "kube-dns-autoscaler-65bc6d4889-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found
kube-system  18m    Normal   SuccessfulCreate        replicaset/kube-dns-autoscaler-65bc6d4889       Created pod: kube-dns-autoscaler-65bc6d4889-8j96k
kube-system  11m    Normal   SuccessfulCreate        replicaset/kube-dns-autoscaler-65bc6d4889       Created pod: kube-dns-autoscaler-65bc6d4889-mzf7g
kube-system  19m    Normal   ScalingReplicaSet       deployment/kube-dns-autoscaler                  Scaled up replica set kube-dns-autoscaler-65bc6d4889 to 1
kube-system  10m    Warning  FailedToUpdateEndpoint  endpoints/kube-dns                              Failed to update endpoint kube-system/kube-dns: Operation cannot be fulfilled on endpoints "kube-dns": the object has been modified; please apply your changes to the latest version and try again
kube-system  18m    Normal   Pulled                  pod/kube-proxy-bootstrap-e2e-minion-group-5wcz  Container image "k8s.gcr.io/kube-proxy-amd64:v1.18.0-alpha.1.810_f437ff75d45517" already present on machine
kube-system  18m    Normal   Created                 pod/kube-proxy-bootstrap-e2e-minion-group-5wcz  Created container kube-proxy
kube-system  18m    Normal   Started                 pod/kube-proxy-bootstrap-e2e-minion-group-5wcz  Started container kube-proxy
kube-system  18m    Normal   Pulled                  pod/kube-proxy-bootstrap-e2e-minion-group-9dh8  Container image "k8s.gcr.io/kube-proxy-amd64:v1.18.0-alpha.1.810_f437ff75d45517" already present on machine
kube-system  18m    Normal   Created                 pod/kube-proxy-bootstrap-e2e-minion-group-9dh8  Created container kube-proxy
kube-system  18m    Normal   Started                 pod/kube-proxy-bootstrap-e2e-minion-group-9dh8  Started container kube-proxy
kube-system  18m    Normal   Pulled                  pod/kube-proxy-bootstrap-e2e-minion-group-mnwl  Container image "k8s.gcr.io/kube-proxy-amd64:v1.18.0-alpha.1.810_f437ff75d45517" already present on machine
kube-system  18m    Normal   Created                 pod/kube-proxy-bootstrap-e2e-minion-group-mnwl  Created container kube-proxy
kube-system  18m    Normal   Started                 pod/kube-proxy-bootstrap-e2e-minion-group-mnwl  Started container kube-proxy
kube-system  18m    Normal   Pulled                  pod/kube-proxy-bootstrap-e2e-minion-group-n0jl  Container image "k8s.gcr.io/kube-proxy-amd64:v1.18.0-alpha.1.810_f437ff75d45517" already present on machine
kube-system  18m    Normal   Created                 pod/kube-proxy-bootstrap-e2e-minion-group-n0jl  Created container kube-proxy
kube-system  18m    Normal   Started                 pod/kube-proxy-bootstrap-e2e-minion-group-n0jl  Started container kube-proxy
kube-system  19m    Normal   LeaderElection          endpoints/kube-scheduler                        bootstrap-e2e-master_77b68166-48c2-43e4-bee7-b726e685f478 became leader
kube-system  19m    Normal   LeaderElection          lease/kube-scheduler                            bootstrap-e2e-master_77b68166-48c2-43e4-bee7-b726e685f478 became leader
kube-system  18m    Warning  FailedScheduling        pod/kubernetes-dashboard-7778f8b456-dr9n4       no nodes available to schedule pods
kube-system  18m    Warning  FailedScheduling        pod/kubernetes-dashboard-7778f8b456-dr9n4       0/3 nodes are available: 3 node(s) had taints that the pod didn't tolerate.
kube-system  18m    Warning  FailedScheduling        pod/kubernetes-dashboard-7778f8b456-dr9n4       0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) had taints that the pod didn't tolerate.
kube-system  18m    Normal   Scheduled               pod/kubernetes-dashboard-7778f8b456-dr9n4       Successfully assigned kube-system/kubernetes-dashboard-7778f8b456-dr9n4 to bootstrap-e2e-minion-group-mnwl
kube-system  18m    Normal   Pulling                 pod/kubernetes-dashboard-7778f8b456-dr9n4       Pulling image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1"
kube-system  18m    Normal   Pulled                  pod/kubernetes-dashboard-7778f8b456-dr9n4       Successfully pulled image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1"
kube-system  18m    Normal   Created                 pod/kubernetes-dashboard-7778f8b456-dr9n4       Created container kubernetes-dashboard
kube-system  18m    Normal   Started                 pod/kubernetes-dashboard-7778f8b456-dr9n4       Started container kubernetes-dashboard
kube-system  18m    Normal   SuccessfulCreate        replicaset/kubernetes-dashboard-7778f8b456      Created pod: kubernetes-dashboard-7778f8b456-dr9n4
kube-system  18m    Normal   ScalingReplicaSet       deployment/kubernetes-dashboard                 Scaled up replica set kubernetes-dashboard-7778f8b456 to 1
kube-system  18m    Warning  FailedScheduling        pod/l7-default-backend-678889f899-mzk9g         no nodes available to schedule pods
kube-system  18m    Warning  FailedScheduling        pod/l7-default-backend-678889f899-mzk9g         0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) had taints that the pod didn't tolerate.
kube-system  18m    Normal   Scheduled               pod/l7-default-backend-678889f899-mzk9g         Successfully assigned kube-system/l7-default-backend-678889f899-mzk9g to bootstrap-e2e-minion-group-5wcz
kube-system  18m    Normal   Pulling                 pod/l7-default-backend-678889f899-mzk9g         Pulling image "k8s.gcr.io/ingress-gce-404-server-with-metrics-amd64:v1.6.0"
kube-system  18m    Normal   Pulled                  pod/l7-default-backend-678889f899-mzk9g         Successfully pulled image "k8s.gcr.io/ingress-gce-404-server-with-metrics-amd64:v1.6.0"
kube-system  18m    Normal   Created                 pod/l7-default-backend-678889f899-mzk9g         Created container default-http-backend
kube-system  18m    Normal   Started                 pod/l7-default-backend-678889f899-mzk9g         Started container default-http-backend
kube-system  19m    Warning  FailedCreate            replicaset/l7-default-backend-678889f899        Error creating: pods "l7-default-backend-678889f899-" is forbidden: no providers available to validate pod request
kube-system  18m    Warning  FailedCreate            replicaset/l7-default-backend-678889f899        Error creating: pods "l7-default-backend-678889f899-" is forbidden: unable to validate against any pod security policy: []
kube-system  18m    Normal   SuccessfulCreate        replicaset/l7-default-backend-678889f899        Created pod: l7-default-backend-678889f899-mzk9g
kube-system  19m    Normal   ScalingReplicaSet       deployment/l7-default-backend                   Scaled up replica set l7-default-backend-678889f899 to 1
kube-system  18m    Normal   Created                 pod/l7-lb-controller-bootstrap-e2e-master       Created container l7-lb-controller
kube-system  18m    Normal   Started                 pod/l7-lb-controller-bootstrap-e2e-master       Started container l7-lb-controller
kube-system  18m    Normal   Pulled                  pod/l7-lb-controller-bootstrap-e2e-master       Container image "k8s.gcr.io/ingress-gce-glbc-amd64:v1.6.1" already present on machine
kube-system  18m    Warning  BackOff                 pod/l7-lb-controller-bootstrap-e2e-master       Back-off restarting failed container
kube-system  18m    Normal   Scheduled               pod/metadata-proxy-v0.1-8q8nt                   Successfully assigned kube-system/metadata-proxy-v0.1-8q8nt to bootstrap-e2e-minion-group-n0jl
kube-system  18m    Normal   Pulling                 pod/metadata-proxy-v0.1-8q8nt                   Pulling image "k8s.gcr.io/metadata-proxy:v0.1.12"
kube-system  18m    Normal   Pulled                  pod/metadata-proxy-v0.1-8q8nt                   Successfully pulled image "k8s.gcr.io/metadata-proxy:v0.1.12"
kube-system  18m    Normal   Created                 pod/metadata-proxy-v0.1-8q8nt                   Created container metadata-proxy
kube-system  18m    Normal   Started                 pod/metadata-proxy-v0.1-8q8nt                   Started container metadata-proxy
kube-system  18m    Normal   Pulling                 pod/metadata-proxy-v0.1-8q8nt                   Pulling image "k8s.gcr.io/prometheus-to-sd:v0.5.0"
kube-system  18m    Normal   Pulled                  pod/metadata-proxy-v0.1-8q8nt                   Successfully pulled image "k8s.gcr.io/prometheus-to-sd:v0.5.0"
kube-system  18m    Normal   Created                 pod/metadata-proxy-v0.1-8q8nt                   Created container prometheus-to-sd-exporter
kube-system  18m    Normal   Started                 pod/metadata-proxy-v0.1-8q8nt                   Started container prometheus-to-sd-exporter
kube-system  18m    Normal   Scheduled               pod/metadata-proxy-v0.1-d56tj                   Successfully assigned kube-system/metadata-proxy-v0.1-d56tj to bootstrap-e2e-minion-group-5wcz
kube-system  18m    Normal   Pulling                 pod/metadata-proxy-v0.1-d56tj                   Pulling image "k8s.gcr.io/metadata-proxy:v0.1.12"
kube-system  18m    Normal   Pulled                  pod/metadata-proxy-v0.1-d56tj                   Successfully pulled image "k8s.gcr.io/metadata-proxy:v0.1.12"
kube-system  18m    Normal   Created                 pod/metadata-proxy-v0.1-d56tj                   Created container metadata-proxy
kube-system  18m    Normal   Started                 pod/metadata-proxy-v0.1-d56tj                   Started container metadata-proxy
kube-system  18m    Normal   Pulling                 pod/metadata-proxy-v0.1-d56tj                   Pulling image "k8s.gcr.io/prometheus-to-sd:v0.5.0"
kube-system  18m    Normal   Pulled                  pod/metadata-proxy-v0.1-d56tj                   Successfully pulled image "k8s.gcr.io/prometheus-to-sd:v0.5.0"
kube-system  18m    Normal   Created                 pod/metadata-proxy-v0.1-d56tj                   Created container prometheus-to-sd-exporter
kube-system  18m    Normal   Started                 pod/metadata-proxy-v0.1-d56tj                   Started container prometheus-to-sd-exporter
kube-system  18m    Normal   Scheduled               pod/metadata-proxy-v0.1-l84kl                   Successfully assigned kube-system/metadata-proxy-v0.1-l84kl to bootstrap-e2e-minion-group-9dh8
kube-system  18m    Normal   Pulling                 pod/metadata-proxy-v0.1-l84kl                   Pulling image "k8s.gcr.io/metadata-proxy:v0.1.12"
kube-system  18m    Normal   Pulled                  pod/metadata-proxy-v0.1-l84kl                   Successfully pulled image "k8s.gcr.io/metadata-proxy:v0.1.12"
kube-system  18m    Normal   Created                 pod/metadata-proxy-v0.1-l84kl                   Created container metadata-proxy
kube-system  18m    Normal   Started                 pod/metadata-proxy-v0.1-l84kl                   Started container metadata-proxy
kube-system  18m    Normal   Pulling                 pod/metadata-proxy-v0.1-l84kl                   Pulling image "k8s.gcr.io/prometheus-to-sd:v0.5.0"
kube-system  18m    Normal   Pulled                  pod/metadata-proxy-v0.1-l84kl                   Successfully pulled image "k8s.gcr.io/prometheus-to-sd:v0.5.0"
kube-system  18m    Normal   Created                 pod/metadata-proxy-v0.1-l84kl                   Created container prometheus-to-sd-exporter
kube-system  18m    Normal   Started                 pod/metadata-proxy-v0.1-l84kl                   Started container prometheus-to-sd-exporter
kube-system  18m    Normal   Scheduled               pod/metadata-proxy-v0.1-pnxbm                   Successfully assigned kube-system/metadata-proxy-v0.1-pnxbm to bootstrap-e2e-master
kube-system  18m    Normal   Pulling                 pod/metadata-proxy-v0.1-pnxbm                   Pulling image "k8s.gcr.io/metadata-proxy:v0.1.12"
kube-system  18m    Normal   Pulled                  pod/metadata-proxy-v0.1-pnxbm                   Successfully pulled image "k8s.gcr.io/metadata-proxy:v0.1.12"
kube-system  18m    Normal   Created                 pod/metadata-proxy-v0.1-pnxbm                   Created container metadata-proxy
kube-system  18m    Normal   Started                 pod/metadata-proxy-v0.1-pnxbm                   Started container metadata-proxy
kube-system  18m    Normal   Pulling                 pod/metadata-proxy-v0.1-pnxbm                   Pulling image "k8s.gcr.io/prometheus-to-sd:v0.5.0"
kube-system  18m    Normal   Pulled                  pod/metadata-proxy-v0.1-pnxbm                   Successfully pulled image "k8s.gcr.io/prometheus-to-sd:v0.5.0"
kube-system  18m    Normal   Created                 pod/metadata-proxy-v0.1-pnxbm                   Created container prometheus-to-sd-exporter
kube-system  18m    Normal   Started                 pod/metadata-proxy-v0.1-pnxbm                   Started container prometheus-to-sd-exporter
kube-system  18m    Normal   Scheduled               pod/metadata-proxy-v0.1-xvc29                   Successfully assigned kube-system/metadata-proxy-v0.1-xvc29 to bootstrap-e2e-minion-group-mnwl
kube-system  18m    Warning  FailedMount             pod/metadata-proxy-v0.1-xvc29                   MountVolume.SetUp failed for volume "metadata-proxy-token-ts8rf" : failed to sync secret cache: timed out waiting for the condition
kube-system  18m    Normal   Pulling                 pod/metadata-proxy-v0.1-xvc29                   Pulling image "k8s.gcr.io/metadata-proxy:v0.1.12"
kube-system  18m    Normal   Pulled                  pod/metadata-proxy-v0.1-xvc29                   Successfully pulled image "k8s.gcr.io/metadata-proxy:v0.1.12"
kube-system  18m    Normal   Created                 pod/metadata-proxy-v0.1-xvc29                   Created container metadata-proxy
kube-system  18m    Normal   Started                 pod/metadata-proxy-v0.1-xvc29                   Started container metadata-proxy
kube-system  18m    Normal   Pulling                 pod/metadata-proxy-v0.1-xvc29                   Pulling image "k8s.gcr.io/prometheus-to-sd:v0.5.0"
kube-system  18m    Normal   Pulled                  pod/metadata-proxy-v0.1-xvc29                   Successfully pulled image "k8s.gcr.io/prometheus-to-sd:v0.5.0"
kube-system  18m    Normal   Created                 pod/metadata-proxy-v0.1-xvc29                   Created container prometheus-to-sd-exporter
kube-system  18m    Normal   Started                 pod/metadata-proxy-v0.1-xvc29                   Started container prometheus-to-sd-exporter
kube-system  18m    Normal   SuccessfulCreate        daemonset/metadata-proxy-v0.1                   Created pod: metadata-proxy-v0.1-l84kl
kube-system  18m    Normal   SuccessfulCreate        daemonset/metadata-proxy-v0.1                   Created pod: metadata-proxy-v0.1-d56tj
kube-system  18m    Normal   SuccessfulCreate        daemonset/metadata-proxy-v0.1                   Created pod: metadata-proxy-v0.1-xvc29
kube-system  18m    Normal   SuccessfulCreate        daemonset/metadata-proxy-v0.1                   Created pod: metadata-proxy-v0.1-8q8nt
kube-system  18m    Normal   SuccessfulCreate        daemonset/metadata-proxy-v0.1                   Created pod: metadata-proxy-v0.1-pnxbm
kube-system  18m    Normal   Scheduled               pod/metrics-server-v0.3.6-5f859c87d6-tqlh6      Successfully assigned kube-system/metrics-server-v0.3.6-5f859c87d6-tqlh6 to bootstrap-e2e-minion-group-mnwl
kube-system  18m    Normal   Pulling                 pod/metrics-server-v0.3.6-5f859c87d6-tqlh6      Pulling image "k8s.gcr.io/metrics-server-amd64:v0.3.6"
kube-system  18m    Normal   Pulled                  pod/metrics-server-v0.3.6-5f859c87d6-tqlh6      Successfully pulled image "k8s.gcr.io/metrics-server-amd64:v0.3.6"
kube-system  18m    Normal   Created                 pod/metrics-server-v0.3.6-5f859c87d6-tqlh6      Created container metrics-server
kube-system  18m    Normal   Started                 pod/metrics-server-v0.3.6-5f859c87d6-tqlh6      Started container metrics-server
kube-system  18m    Normal   Pulling                 pod/metrics-server-v0.3.6-5f859c87d6-tqlh6      Pulling image "k8s.gcr.io/addon-resizer:1.8.7"
kube-system  18m    Normal   Pulled                  pod/metrics-server-v0.3.6-5f859c87d6-tqlh6      Successfully pulled image "k8s.gcr.io/addon-resizer:1.8.7"
kube-system  18m    Normal   Created                 pod/metrics-server-v0.3.6-5f859c87d6-tqlh6      Created container metrics-server-nanny
kube-system  18m    Normal   Started                 pod/metrics-server-v0.3.6-5f859c87d6-tqlh6      Started container metrics-server-nanny
kube-system  18m    Normal   SuccessfulCreate        replicaset/metrics-server-v0.3.6-5f859c87d6     Created pod: metrics-server-v0.3.6-5f859c87d6-tqlh6
kube-system  18m    Warning  FailedScheduling        pod/metrics-server-v0.3.6-65d4dc878-zfkkz       no nodes available to schedule pods
kube-system  18m    Warning  FailedScheduling        pod/metrics-server-v0.3.6-65d4dc878-zfkkz       0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
kube-system  18m    Warning  FailedScheduling        pod/metrics-server-v0.3.6-65d4dc878-zfkkz       0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) had taints that the pod didn't tolerate.
kube-system  18m    Normal   Scheduled               pod/metrics-server-v0.3.6-65d4dc878-zfkkz       Successfully assigned kube-system/metrics-server-v0.3.6-65d4dc878-zfkkz to bootstrap-e2e-minion-group-5wcz
kube-system  18m    Normal   Pulling                 pod/metrics-server-v0.3.6-65d4dc878-zfkkz       Pulling image "k8s.gcr.io/metrics-server-amd64:v0.3.6"
kube-system  18m    Normal   Pulled                  pod/metrics-server-v0.3.6-65d4dc878-zfkkz       Successfully pulled image "k8s.gcr.io/metrics-server-amd64:v0.3.6"
kube-system  18m    Normal   Created                 pod/metrics-server-v0.3.6-65d4dc878-zfkkz       Created container metrics-server
kube-system  18m    Normal   Started                 pod/metrics-server-v0.3.6-65d4dc878-zfkkz       Started container metrics-server
kube-system  18m    Normal   Pulling                 pod/metrics-server-v0.3.6-65d4dc878-zfkkz       Pulling image "k8s.gcr.io/addon-resizer:1.8.7"
kube-system  18m    Normal   Pulled                  pod/metrics-server-v0.3.6-65d4dc878-zfkkz       Successfully pulled image "k8s.gcr.io/addon-resizer:1.8.7"
kube-system  18m    Normal   Created                 pod/metrics-server-v0.3.6-65d4dc878-zfkkz       Created container
metrics-server-nanny\nkube-system                          18m         Normal    Started                              pod/metrics-server-v0.3.6-65d4dc878-zfkkz                                        Started container metrics-server-nanny\nkube-system                          18m         Normal    Killing                              pod/metrics-server-v0.3.6-65d4dc878-zfkkz                                        Stopping container metrics-server\nkube-system                          18m         Normal    Killing                              pod/metrics-server-v0.3.6-65d4dc878-zfkkz                                        Stopping container metrics-server-nanny\nkube-system                          18m         Warning   FailedCreate                         replicaset/metrics-server-v0.3.6-65d4dc878                                       Error creating: pods \"metrics-server-v0.3.6-65d4dc878-\" is forbidden: unable to validate against any pod security policy: []\nkube-system                          18m         Normal    SuccessfulCreate                     replicaset/metrics-server-v0.3.6-65d4dc878                                       Created pod: metrics-server-v0.3.6-65d4dc878-zfkkz\nkube-system                          18m         Normal    SuccessfulDelete                     replicaset/metrics-server-v0.3.6-65d4dc878                                       Deleted pod: metrics-server-v0.3.6-65d4dc878-zfkkz\nkube-system                          18m         Normal    ScalingReplicaSet                    deployment/metrics-server-v0.3.6                                                 Scaled up replica set metrics-server-v0.3.6-65d4dc878 to 1\nkube-system                          18m         Normal    ScalingReplicaSet                    deployment/metrics-server-v0.3.6                                                 Scaled up replica set metrics-server-v0.3.6-5f859c87d6 to 1\nkube-system                          18m         Normal    ScalingReplicaSet                    deployment/metrics-server-v0.3.6                                                 Scaled down replica set metrics-server-v0.3.6-65d4dc878 to 0\nkube-system                          18m         Warning   FailedScheduling                     pod/volume-snapshot-controller-0                                                 no nodes available to schedule pods\nkube-system                          18m         Warning   FailedScheduling                     pod/volume-snapshot-controller-0                                                 0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) had taints that the pod didn't tolerate.\nkube-system                          18m         Normal    Scheduled                            pod/volume-snapshot-controller-0                                                 Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-n0jl\nkube-system                          18m         Normal    Pulling                              pod/volume-snapshot-controller-0                                                 Pulling image \"quay.io/k8scsi/snapshot-controller:v2.0.0-rc2\"\nkube-system                          18m         Normal    Pulled                               pod/volume-snapshot-controller-0                                                 Successfully pulled image \"quay.io/k8scsi/snapshot-controller:v2.0.0-rc2\"\nkube-system                          18m         Normal    Created                              pod/volume-snapshot-controller-0           
                                      Created container volume-snapshot-controller\nkube-system                          18m         Normal    Started                              pod/volume-snapshot-controller-0                                                 Started container volume-snapshot-controller\nkube-system                          18m         Normal    SuccessfulCreate                     statefulset/volume-snapshot-controller                                           create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful\nkubectl-1384                         3m32s       Normal    Scheduled                            pod/agnhost-master-l2hjd                                                         Successfully assigned kubectl-1384/agnhost-master-l2hjd to bootstrap-e2e-minion-group-mnwl\nkubectl-1384                         3m29s       Normal    Pulled                               pod/agnhost-master-l2hjd                                                         Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nkubectl-1384                         3m29s       Normal    Created                              pod/agnhost-master-l2hjd                                                         Created container agnhost-master\nkubectl-1384                         3m28s       Normal    Started                              pod/agnhost-master-l2hjd                                                         Started container agnhost-master\nkubectl-1384                         3m32s       Normal    SuccessfulCreate                     replicationcontroller/agnhost-master                                             Created pod: agnhost-master-l2hjd\nkubectl-1642                         2m15s       Normal    Scheduled                            pod/e2e-test-httpd-rc-h87fz                                                      Successfully assigned kubectl-1642/e2e-test-httpd-rc-h87fz to bootstrap-e2e-minion-group-9dh8\nkubectl-1642                         2m11s       Normal    Pulled                               pod/e2e-test-httpd-rc-h87fz                                                      Container image \"docker.io/library/httpd:2.4.38-alpine\" already present on machine\nkubectl-1642                         2m10s       Normal    Created                              pod/e2e-test-httpd-rc-h87fz                                                      Created container e2e-test-httpd-rc\nkubectl-1642                         2m9s        Normal    Started                              pod/e2e-test-httpd-rc-h87fz                                                      Started container e2e-test-httpd-rc\nkubectl-1642                         2m16s       Normal    SuccessfulCreate                     replicationcontroller/e2e-test-httpd-rc                                          Created pod: e2e-test-httpd-rc-h87fz\nkubectl-3606                         7s          Normal    Scheduled                            pod/deployment4q2zjmwrmgm-87fd78899-j6bcc                                        Successfully assigned kubectl-3606/deployment4q2zjmwrmgm-87fd78899-j6bcc to bootstrap-e2e-minion-group-5wcz\nkubectl-3606                         7s          Normal    SuccessfulCreate                     replicaset/deployment4q2zjmwrmgm-87fd78899                                       Created pod: deployment4q2zjmwrmgm-87fd78899-j6bcc\nkubectl-3606                         7s          Normal    ScalingReplicaSet                    
deployment/deployment4q2zjmwrmgm                                                 Scaled up replica set deployment4q2zjmwrmgm-87fd78899 to 1\nkubectl-3606                         8s          Normal    Scheduled                            pod/ds6q2zjmwrmgm-2zl4w                                                          Successfully assigned kubectl-3606/ds6q2zjmwrmgm-2zl4w to bootstrap-e2e-minion-group-mnwl\nkubectl-3606                         7s          Warning   FailedMount                          pod/ds6q2zjmwrmgm-2zl4w                                                          MountVolume.SetUp failed for volume \"default-token-xvwfs\" : failed to sync secret cache: timed out waiting for the condition\nkubectl-3606                         8s          Normal    Scheduled                            pod/ds6q2zjmwrmgm-5n2hl                                                          Successfully assigned kubectl-3606/ds6q2zjmwrmgm-5n2hl to bootstrap-e2e-minion-group-5wcz\nkubectl-3606                         8s          Normal    Scheduled                            pod/ds6q2zjmwrmgm-68nxn                                                          Successfully assigned kubectl-3606/ds6q2zjmwrmgm-68nxn to bootstrap-e2e-minion-group-9dh8\nkubectl-3606                         5s          Warning   FailedCreatePodSandBox               pod/ds6q2zjmwrmgm-68nxn                                                          Failed to create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod \"ds6q2zjmwrmgm-68nxn\": Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused \"process_linux.go:430: container init caused \\\"process_linux.go:413: running prestart hook 0 caused \\\\\\\"error running hook: exit status 1, stdout: , stderr: time=\\\\\\\\\\\\\\\"2020-01-16T07:06:17Z\\\\\\\\\\\\\\\" level=fatal msg=\\\\\\\\\\\\\\\"no such file or directory\\\\\\\\\\\\\\\"\\\\\\\\n\\\\\\\"\\\"\": unknown\nkubectl-3606                         8s          Normal    Scheduled                            pod/ds6q2zjmwrmgm-7dvhs                                                          Successfully assigned kubectl-3606/ds6q2zjmwrmgm-7dvhs to bootstrap-e2e-minion-group-n0jl\nkubectl-3606                         4s          Warning   FailedCreatePodSandBox               pod/ds6q2zjmwrmgm-7dvhs                                                          Failed to create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod \"ds6q2zjmwrmgm-7dvhs\": Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused \"process_linux.go:430: container init caused \\\"process_linux.go:413: running prestart hook 0 caused \\\\\\\"error running hook: exit status 1, stdout: , stderr: time=\\\\\\\\\\\\\\\"2020-01-16T07:06:18Z\\\\\\\\\\\\\\\" level=fatal msg=\\\\\\\\\\\\\\\"no such file or directory\\\\\\\\\\\\\\\"\\\\\\\\n\\\\\\\"\\\"\": unknown\nkubectl-3606                         9s          Normal    SuccessfulCreate                     daemonset/ds6q2zjmwrmgm                                                          Created pod: ds6q2zjmwrmgm-2zl4w\nkubectl-3606                         8s          Normal    SuccessfulCreate                     daemonset/ds6q2zjmwrmgm                                                          Created pod: ds6q2zjmwrmgm-68nxn\nkubectl-3606                         8s          Normal    SuccessfulCreate                     
kubectl-3606  8s  Normal  SuccessfulCreate  daemonset/ds6q2zjmwrmgm  Created pod: ds6q2zjmwrmgm-5n2hl
kubectl-3606  <unknown>    Laziness    some data here
kubectl-3606  15s  Normal  ADD  ingress/ingress1q2zjmwrmgm  kubectl-3606/ingress1q2zjmwrmgm
kubectl-3606  14s  Warning  Translate  ingress/ingress1q2zjmwrmgm  error while evaluating the ingress spec: could not find service "kubectl-3606/service"
kubectl-3606  25s  Warning  FailedScheduling  pod/pod1q2zjmwrmgm  0/5 nodes are available: 1 node(s) were unschedulable, 4 Insufficient cpu.
kubectl-3606  24s  Warning  FailedScheduling  pod/pod1q2zjmwrmgm  skip schedule deleting pod: kubectl-3606/pod1q2zjmwrmgm
kubectl-3606  21s  Normal  Scheduled  pod/rc1q2zjmwrmgm-pcr49  Successfully assigned kubectl-3606/rc1q2zjmwrmgm-pcr49 to bootstrap-e2e-minion-group-5wcz
kubectl-3606  17s  Normal  Pulling  pod/rc1q2zjmwrmgm-pcr49  Pulling image "fedora:latest"
kubectl-3606  22s  Normal  SuccessfulCreate  replicationcontroller/rc1q2zjmwrmgm  Created pod: rc1q2zjmwrmgm-pcr49
kubectl-3606  5s  Normal  Scheduled  pod/rs3q2zjmwrmgm-f82ml  Successfully assigned kubectl-3606/rs3q2zjmwrmgm-f82ml to bootstrap-e2e-minion-group-9dh8
kubectl-3606  6s  Normal  SuccessfulCreate  replicaset/rs3q2zjmwrmgm  Created pod: rs3q2zjmwrmgm-f82ml
kubectl-3606  9s  Warning  FailedCreate  statefulset/ss3q2zjmwrmgm  create Pod ss3q2zjmwrmgm-0 in StatefulSet ss3q2zjmwrmgm failed error: Pod "ss3q2zjmwrmgm-0" is invalid: spec.containers: Required value
kubectl-4683  3m53s  Normal  Scheduled  pod/agnhost-master-bzp59  Successfully assigned kubectl-4683/agnhost-master-bzp59 to bootstrap-e2e-minion-group-5wcz
kubectl-4683  3m54s  Normal  Scheduled  pod/agnhost-master-qzgtx  Successfully assigned kubectl-4683/agnhost-master-qzgtx to bootstrap-e2e-minion-group-9dh8
kubectl-4683  3m54s  Normal  SuccessfulCreate  replicationcontroller/agnhost-master  Created pod: agnhost-master-qzgtx
kubectl-4683  3m53s  Normal  SuccessfulCreate  replicationcontroller/agnhost-master  Created pod: agnhost-master-bzp59
kubectl-499  3m38s  Normal  Scheduled  pod/agnhost-master-74c46fb7d4-kll62  Successfully assigned kubectl-499/agnhost-master-74c46fb7d4-kll62 to bootstrap-e2e-minion-group-9dh8
kubectl-499  3m32s  Normal  Pulled  pod/agnhost-master-74c46fb7d4-kll62  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
kubectl-499  3m32s  Normal  Created  pod/agnhost-master-74c46fb7d4-kll62  Created container master
kubectl-499  3m30s  Normal  Started  pod/agnhost-master-74c46fb7d4-kll62  Started container master
kubectl-499  3m38s  Normal  SuccessfulCreate  replicaset/agnhost-master-74c46fb7d4  Created pod: agnhost-master-74c46fb7d4-kll62
kubectl-499  3m38s  Normal  ScalingReplicaSet  deployment/agnhost-master  Scaled up replica set agnhost-master-74c46fb7d4 to 1
kubectl-499  3m37s  Normal  Scheduled  pod/agnhost-slave-774cfc759f-9phm6  Successfully assigned kubectl-499/agnhost-slave-774cfc759f-9phm6 to bootstrap-e2e-minion-group-9dh8
kubectl-499  3m9s  Normal  Pulled  pod/agnhost-slave-774cfc759f-9phm6  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
kubectl-499  3m9s  Normal  Created  pod/agnhost-slave-774cfc759f-9phm6  Created container slave
kubectl-499  3m8s  Normal  Started  pod/agnhost-slave-774cfc759f-9phm6  Started container slave
kubectl-499  3m37s  Normal  Scheduled  pod/agnhost-slave-774cfc759f-l7pm8  Successfully assigned kubectl-499/agnhost-slave-774cfc759f-l7pm8 to bootstrap-e2e-minion-group-mnwl
kubectl-499  3m17s  Normal  Pulled  pod/agnhost-slave-774cfc759f-l7pm8  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
kubectl-499  3m16s  Normal  Created  pod/agnhost-slave-774cfc759f-l7pm8  Created container slave
kubectl-499  3m16s  Normal  Started  pod/agnhost-slave-774cfc759f-l7pm8  Started container slave
kubectl-499  3m37s  Normal  SuccessfulCreate  replicaset/agnhost-slave-774cfc759f  Created pod: agnhost-slave-774cfc759f-9phm6
kubectl-499  3m37s  Normal  SuccessfulCreate  replicaset/agnhost-slave-774cfc759f  Created pod: agnhost-slave-774cfc759f-l7pm8
kubectl-499  3m37s  Normal  ScalingReplicaSet  deployment/agnhost-slave  Scaled up replica set agnhost-slave-774cfc759f to 2
kubectl-499  3m39s  Normal  Scheduled  pod/frontend-6c5f89d5d4-56tmg  Successfully assigned kubectl-499/frontend-6c5f89d5d4-56tmg to bootstrap-e2e-minion-group-9dh8
kubectl-499  3m37s  Warning  FailedMount  pod/frontend-6c5f89d5d4-56tmg  MountVolume.SetUp failed for volume "default-token-5jvcv" : failed to sync secret cache: timed out waiting for the condition
kubectl-499  3m32s  Normal  Pulled  pod/frontend-6c5f89d5d4-56tmg  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
kubectl-499  3m32s  Normal  Created  pod/frontend-6c5f89d5d4-56tmg  Created container guestbook-frontend
kubectl-499  3m30s  Normal  Started  pod/frontend-6c5f89d5d4-56tmg  Started container guestbook-frontend
kubectl-499  2m46s  Normal  Killing  pod/frontend-6c5f89d5d4-56tmg  Stopping container guestbook-frontend
kubectl-499  3m38s  Normal  Scheduled  pod/frontend-6c5f89d5d4-jw2jt  Successfully assigned kubectl-499/frontend-6c5f89d5d4-jw2jt to bootstrap-e2e-minion-group-mnwl
kubectl-499  3m36s  Normal  Pulled  pod/frontend-6c5f89d5d4-jw2jt  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
kubectl-499  3m36s  Normal  Created  pod/frontend-6c5f89d5d4-jw2jt  Created container guestbook-frontend
kubectl-499  3m36s  Normal  Started  pod/frontend-6c5f89d5d4-jw2jt  Started container guestbook-frontend
kubectl-499  2m46s  Normal  Killing  pod/frontend-6c5f89d5d4-jw2jt  Stopping container guestbook-frontend
kubectl-499  3m38s  Normal  Scheduled  pod/frontend-6c5f89d5d4-vrf8g  Successfully assigned kubectl-499/frontend-6c5f89d5d4-vrf8g to bootstrap-e2e-minion-group-5wcz
kubectl-499  3m36s  Normal  Pulled  pod/frontend-6c5f89d5d4-vrf8g  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
kubectl-499  3m36s  Normal  Created  pod/frontend-6c5f89d5d4-vrf8g  Created container guestbook-frontend
kubectl-499  3m35s  Normal  Started  pod/frontend-6c5f89d5d4-vrf8g  Started container guestbook-frontend
kubectl-499  2m46s  Normal  Killing  pod/frontend-6c5f89d5d4-vrf8g  Stopping container guestbook-frontend
kubectl-499  3m39s  Normal  SuccessfulCreate  replicaset/frontend-6c5f89d5d4  Created pod: frontend-6c5f89d5d4-56tmg
kubectl-499  3m39s  Normal  SuccessfulCreate  replicaset/frontend-6c5f89d5d4  Created pod: frontend-6c5f89d5d4-jw2jt
kubectl-499  3m39s  Normal  SuccessfulCreate  replicaset/frontend-6c5f89d5d4  Created pod: frontend-6c5f89d5d4-vrf8g
kubectl-499  3m40s  Normal  ScalingReplicaSet  deployment/frontend  Scaled up replica set frontend-6c5f89d5d4 to 3
kubectl-6770  28s  Normal  Scheduled  pod/e2e-test-httpd-rc-5chmd  Successfully assigned kubectl-6770/e2e-test-httpd-rc-5chmd to bootstrap-e2e-minion-group-n0jl
kubectl-6770  26s  Normal  Pulled  pod/e2e-test-httpd-rc-5chmd  Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
kubectl-6770  26s  Normal  Created  pod/e2e-test-httpd-rc-5chmd  Created container e2e-test-httpd-rc
kubectl-6770  26s  Normal  Started  pod/e2e-test-httpd-rc-5chmd  Started container e2e-test-httpd-rc
kubectl-6770  11s  Normal  Killing  pod/e2e-test-httpd-rc-5chmd  Stopping container e2e-test-httpd-rc
kubectl-6770  22s  Normal  Scheduled  pod/e2e-test-httpd-rc-b09c6c62784b14c8174e9d912ef2c451-mpn7t  Successfully assigned kubectl-6770/e2e-test-httpd-rc-b09c6c62784b14c8174e9d912ef2c451-mpn7t to bootstrap-e2e-minion-group-9dh8
kubectl-6770  21s  Warning  FailedMount  pod/e2e-test-httpd-rc-b09c6c62784b14c8174e9d912ef2c451-mpn7t  MountVolume.SetUp failed for volume "default-token-rz8rw" : failed to sync secret cache: timed out waiting for the condition
kubectl-6770  17s  Normal  Pulled  pod/e2e-test-httpd-rc-b09c6c62784b14c8174e9d912ef2c451-mpn7t  Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
kubectl-6770  17s  Normal  Created  pod/e2e-test-httpd-rc-b09c6c62784b14c8174e9d912ef2c451-mpn7t  Created container e2e-test-httpd-rc
kubectl-6770  17s  Normal  Started  pod/e2e-test-httpd-rc-b09c6c62784b14c8174e9d912ef2c451-mpn7t  Started container e2e-test-httpd-rc
kubectl-6770  23s  Normal  SuccessfulCreate  replicationcontroller/e2e-test-httpd-rc-b09c6c62784b14c8174e9d912ef2c451  Created pod: e2e-test-httpd-rc-b09c6c62784b14c8174e9d912ef2c451-mpn7t
kubectl-6770  29s  Normal  SuccessfulCreate  replicationcontroller/e2e-test-httpd-rc  Created pod: e2e-test-httpd-rc-5chmd
kubectl-6770  11s  Normal  SuccessfulDelete  replicationcontroller/e2e-test-httpd-rc  Deleted pod: e2e-test-httpd-rc-5chmd
kubectl-8814  33s  Normal  Scheduled  pod/httpd  Successfully assigned kubectl-8814/httpd to bootstrap-e2e-minion-group-9dh8
kubectl-8814  30s  Normal  Pulled  pod/httpd  Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
kubectl-8814  30s  Normal  Created  pod/httpd  Created container httpd
kubectl-8814  29s  Normal  Started  pod/httpd  Started container httpd
kubectl-8814  16s  Normal  Killing  pod/httpd  Stopping container httpd
kubectl-8926  5m39s  Normal  Scheduled  pod/update-demo-kitten-bjfbf  Successfully assigned kubectl-8926/update-demo-kitten-bjfbf to bootstrap-e2e-minion-group-mnwl
kubectl-8926  5m35s  Normal  Pulling  pod/update-demo-kitten-bjfbf  Pulling image "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
kubectl-8926  5m34s  Normal  Pulled  pod/update-demo-kitten-bjfbf  Successfully pulled image "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
kubectl-8926  5m34s  Normal  Created  pod/update-demo-kitten-bjfbf  Created container update-demo
kubectl-8926  5m34s  Normal  Started  pod/update-demo-kitten-bjfbf  Started container update-demo
kubectl-8926  5m24s  Normal  Scheduled  pod/update-demo-kitten-bksjj  Successfully assigned kubectl-8926/update-demo-kitten-bksjj to bootstrap-e2e-minion-group-5wcz
kubectl-8926  5m12s  Normal  Pulling  pod/update-demo-kitten-bksjj  Pulling image "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
kubectl-8926  5m9s  Normal  Pulled  pod/update-demo-kitten-bksjj  Successfully pulled image "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
kubectl-8926  5m9s  Normal  Created  pod/update-demo-kitten-bksjj  Created container update-demo
kubectl-8926  5m4s  Normal  Started  pod/update-demo-kitten-bksjj  Started container update-demo
kubectl-8926  5m39s  Normal  SuccessfulCreate  replicationcontroller/update-demo-kitten  Created pod: update-demo-kitten-bjfbf
kubectl-8926  5m24s  Normal  SuccessfulCreate  replicationcontroller/update-demo-kitten  Created pod: update-demo-kitten-bksjj
kubectl-8926  6m5s  Normal  Scheduled  pod/update-demo-nautilus-9hvsn  Successfully assigned kubectl-8926/update-demo-nautilus-9hvsn to bootstrap-e2e-minion-group-n0jl
kubectl-8926  6m4s  Warning  FailedMount  pod/update-demo-nautilus-9hvsn  MountVolume.SetUp failed for volume "default-token-pknv9" : failed to sync secret cache: timed out waiting for the condition
kubectl-8926  6m2s  Normal  Pulling  pod/update-demo-nautilus-9hvsn  Pulling image "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
kubectl-8926  5m59s  Normal  Pulled  pod/update-demo-nautilus-9hvsn  Successfully pulled image "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
kubectl-8926  5m59s  Normal  Created  pod/update-demo-nautilus-9hvsn  Created container update-demo
kubectl-8926  5m58s  Normal  Started  pod/update-demo-nautilus-9hvsn  Started container update-demo
kubectl-8926  5m30s  Normal  Killing  pod/update-demo-nautilus-9hvsn  Stopping container update-demo
kubectl-8926  6m5s  Normal  Scheduled  pod/update-demo-nautilus-fj8jv  Successfully assigned kubectl-8926/update-demo-nautilus-fj8jv to bootstrap-e2e-minion-group-9dh8
kubectl-8926  6m2s  Normal  Pulled  pod/update-demo-nautilus-fj8jv  Container image "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" already present on machine
kubectl-8926  6m2s  Normal  Created  pod/update-demo-nautilus-fj8jv  Created container update-demo
kubectl-8926  6m1s  Normal  Started  pod/update-demo-nautilus-fj8jv  Started container update-demo
kubectl-8926  4m55s  Normal  Killing  pod/update-demo-nautilus-fj8jv  Stopping container update-demo
kubectl-8926  6m6s  Normal  SuccessfulCreate  replicationcontroller/update-demo-nautilus  Created pod: update-demo-nautilus-fj8jv
kubectl-8926  6m6s  Normal  SuccessfulCreate  replicationcontroller/update-demo-nautilus  Created pod: update-demo-nautilus-9hvsn
kubectl-8926  5m30s  Normal  SuccessfulDelete  replicationcontroller/update-demo-nautilus  Deleted pod: update-demo-nautilus-9hvsn
kubectl-8926  4m55s  Normal  SuccessfulDelete  replicationcontroller/update-demo-nautilus  Deleted pod: update-demo-nautilus-fj8jv
kubectl-9698  2m15s  Normal  Scheduled  pod/pause  Successfully assigned kubectl-9698/pause to bootstrap-e2e-minion-group-9dh8
kubectl-9698  2m9s  Normal  Pulled  pod/pause  Container image "k8s.gcr.io/pause:3.1" already present on machine
kubectl-9698  2m9s  Normal  Created  pod/pause  Created container pause
kubectl-9698  2m7s  Normal  Started  pod/pause  Started container pause
kubectl-9698  2m2s  Normal  Killing  pod/pause  Stopping container pause
kubelet-2383  5m23s  Normal  Scheduled  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-27tw9  Successfully assigned kubelet-2383/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-27tw9 to bootstrap-e2e-minion-group-mnwl
kubelet-2383  5m15s  Normal  Pulled  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-27tw9  Container image "k8s.gcr.io/pause:3.1" already present on machine
kubelet-2383  5m15s  Normal  Created  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-27tw9  Created container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f
kubelet-2383  5m12s  Normal  Started  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-27tw9  Started container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f
kubelet-2383  4m39s  Normal  Killing  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-27tw9  Stopping container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f
kubelet-2383  5m23s  Normal  Scheduled  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-2k7dk  Successfully assigned kubelet-2383/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-2k7dk to bootstrap-e2e-minion-group-mnwl
kubelet-2383  5m14s  Normal  Pulled  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-2k7dk  Container image "k8s.gcr.io/pause:3.1" already present on machine
kubelet-2383  5m14s  Normal  Created  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-2k7dk  Created container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f
kubelet-2383  5m12s  Normal  Started  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-2k7dk  Started container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f
kubelet-2383  4m41s  Normal  Killing  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-2k7dk  Stopping container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f
kubelet-2383  5m23s  Normal  Scheduled  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-2tl25  Successfully assigned kubelet-2383/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-2tl25 to bootstrap-e2e-minion-group-mnwl
kubelet-2383  5m15s  Normal  Pulled  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-2tl25  Container image "k8s.gcr.io/pause:3.1" already present on machine
kubelet-2383  5m14s  Normal  Created  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-2tl25  Created container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f
kubelet-2383  5m12s  Normal  Started  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-2tl25  Started container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f
kubelet-2383  4m41s  Normal  Killing  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-2tl25  Stopping container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f
kubelet-2383  5m24s  Normal  Scheduled  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-2z9dc  Successfully assigned kubelet-2383/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-2z9dc to bootstrap-e2e-minion-group-n0jl
kubelet-2383  5m15s  Normal  Pulled  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-2z9dc  Container image "k8s.gcr.io/pause:3.1" already present on machine
kubelet-2383  5m14s  Normal  Created  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-2z9dc  Created container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f
kubelet-2383  5m11s  Normal  Started  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-2z9dc  Started container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f
kubelet-2383  4m40s  Normal  Killing  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-2z9dc  Stopping container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f
kubelet-2383  5m23s  Normal  Scheduled  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-5j6zl  Successfully assigned kubelet-2383/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-5j6zl to bootstrap-e2e-minion-group-9dh8
kubelet-2383  5m7s  Normal  Pulled  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-5j6zl  Container image "k8s.gcr.io/pause:3.1" already present on machine
kubelet-2383  5m6s  Normal  Created  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-5j6zl  Created container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f
kubelet-2383  5m1s  Normal  Started  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-5j6zl  Started container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f
kubelet-2383  4m40s  Normal  Killing  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-5j6zl  Stopping container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f
kubelet-2383  5m24s  Normal  Scheduled  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-5jddb  Successfully assigned kubelet-2383/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-5jddb to bootstrap-e2e-minion-group-mnwl
kubelet-2383  5m17s  Normal  Pulled  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-5jddb  Container image "k8s.gcr.io/pause:3.1" already present on machine
kubelet-2383  5m17s  Normal  Created  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-5jddb  Created container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f
kubelet-2383  5m15s  Normal  Started  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-5jddb  Started container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f
kubelet-2383  4m41s  Normal  Killing  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-5jddb  Stopping container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f
kubelet-2383  5m23s  Normal  Scheduled  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-66wp6  Successfully assigned kubelet-2383/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-66wp6 to bootstrap-e2e-minion-group-5wcz
kubelet-2383  5m8s  Normal  Pulled  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-66wp6  Container image "k8s.gcr.io/pause:3.1" already present on machine
kubelet-2383  5m8s  Normal  Created  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-66wp6  Created container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f
kubelet-2383  5m4s  Normal  Started  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-66wp6  Started container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f
kubelet-2383  4m39s  Normal  Killing  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-66wp6  Stopping container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f
kubelet-2383  5m24s  Normal  Scheduled  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-77hqb  Successfully assigned kubelet-2383/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-77hqb to bootstrap-e2e-minion-group-5wcz
kubelet-2383  5m12s  Normal  Pulled  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-77hqb  Container image "k8s.gcr.io/pause:3.1" already present on machine
kubelet-2383  5m11s  Normal  Created  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-77hqb  Created container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f
kubelet-2383  5m4s  Normal  Started  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-77hqb  Started container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f
kubelet-2383  4m40s  Normal  Killing  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-77hqb  Stopping container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f
kubelet-2383  5m24s  Normal  Scheduled  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-95s6n  Successfully assigned kubelet-2383/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-95s6n to bootstrap-e2e-minion-group-n0jl
kubelet-2383  5m15s  Normal  Pulled  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-95s6n  Container image "k8s.gcr.io/pause:3.1" already present on machine
kubelet-2383  5m14s  Normal  Created  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-95s6n  Created container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f
kubelet-2383  5m11s  Normal  Started  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-95s6n  Started container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f
kubelet-2383  4m41s  Normal  Killing  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-95s6n  Stopping container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f
kubelet-2383  5m24s  Normal  Scheduled  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-9lfzf  Successfully assigned kubelet-2383/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-9lfzf to bootstrap-e2e-minion-group-n0jl
kubelet-2383  5m17s  Normal  Pulled  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-9lfzf  Container image "k8s.gcr.io/pause:3.1" already present on machine
kubelet-2383  5m17s  Normal  Created  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-9lfzf  Created container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f
kubelet-2383  5m12s  Normal  Started  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-9lfzf  Started container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f
kubelet-2383  4m39s  Normal  Killing  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-9lfzf  Stopping container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f
kubelet-2383  5m24s  Normal  Scheduled  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-9q2rm  Successfully assigned kubelet-2383/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-9q2rm to bootstrap-e2e-minion-group-9dh8
kubelet-2383  5m11s  Normal  Pulled  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-9q2rm  Container image "k8s.gcr.io/pause:3.1" already present on machine
kubelet-2383  5m11s  Normal  Created  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-9q2rm  Created container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f
kubelet-2383  5m5s  Normal  Started  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-9q2rm  Started container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f
kubelet-2383  4m39s  Normal  Killing  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-9q2rm  Stopping container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f
kubelet-2383  5m24s  Normal  Scheduled  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-9rdp7  Successfully assigned kubelet-2383/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-9rdp7 to bootstrap-e2e-minion-group-9dh8
kubelet-2383  5m9s  Normal  Pulled  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-9rdp7  Container image "k8s.gcr.io/pause:3.1" already present on machine
kubelet-2383  5m9s  Normal  Created  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-9rdp7  Created container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f
kubelet-2383  5m3s  Normal  Started  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-9rdp7  Started container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f
kubelet-2383  4m41s  Normal  Killing  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-9rdp7  Stopping container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f
kubelet-2383  5m24s  Normal  Scheduled  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-bv2lk  Successfully assigned kubelet-2383/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-bv2lk to bootstrap-e2e-minion-group-n0jl
kubelet-2383  5m18s  Normal  Pulled  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-bv2lk  Container image "k8s.gcr.io/pause:3.1" already present on machine
kubelet-2383  5m18s  Normal  Created  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-bv2lk  Created container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f
kubelet-2383  5m14s  Normal  Started  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-bv2lk  Started container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f
kubelet-2383  4m41s  Normal  Killing  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-bv2lk  Stopping container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f
kubelet-2383  5m24s  Normal  Scheduled  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-cmgmm  Successfully assigned kubelet-2383/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-cmgmm to bootstrap-e2e-minion-group-5wcz
kubelet-2383  5m6s  Normal  Pulled  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-cmgmm  Container image "k8s.gcr.io/pause:3.1" already present on machine
kubelet-2383  5m6s  Normal  Created  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-cmgmm  Created container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f
kubelet-2383  5m3s  Normal  Started  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-cmgmm  Started container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f
kubelet-2383  4m41s  Normal  Killing  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-cmgmm  Stopping container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f
kubelet-2383  5m24s  Normal  Scheduled  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-cvdlf  Successfully assigned kubelet-2383/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-cvdlf to bootstrap-e2e-minion-group-5wcz
kubelet-2383  5m10s  Normal  Pulled  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-cvdlf  Container image "k8s.gcr.io/pause:3.1" already present on machine
kubelet-2383  5m10s  Normal  Created  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-cvdlf  Created container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f
kubelet-2383  5m4s  Normal  Started  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-cvdlf  Started container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f
kubelet-2383  4m39s  Normal  Killing  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-cvdlf  Stopping container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f
kubelet-2383  5m23s  Normal  Scheduled  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-df7m8  Successfully assigned kubelet-2383/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-df7m8 to bootstrap-e2e-minion-group-mnwl
kubelet-2383  5m18s  Normal  Pulled  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-df7m8  Container image "k8s.gcr.io/pause:3.1" already present on machine
kubelet-2383  5m17s  Normal  Created  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-df7m8  Created container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f
kubelet-2383  5m15s  Normal  Started  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-df7m8  Started container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f
kubelet-2383  4m39s  Normal  Killing  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-df7m8  Stopping container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f
kubelet-2383  5m24s  Normal  Scheduled  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-fmzkv  Successfully assigned kubelet-2383/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-fmzkv to bootstrap-e2e-minion-group-9dh8
kubelet-2383  5m23s  Warning  FailedMount  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-fmzkv  MountVolume.SetUp failed for volume "default-token-rjw8p" : failed to sync secret cache: timed out waiting for the condition
kubelet-2383  5m10s  Normal  Pulled  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-fmzkv  Container image "k8s.gcr.io/pause:3.1" already present on machine
kubelet-2383  5m9s  Normal  Created  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-fmzkv  Created container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f
kubelet-2383  5m3s  Normal  Started  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-fmzkv  Started container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f
kubelet-2383  4m39s  Normal  Killing  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-fmzkv  Stopping container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f
kubelet-2383  5m24s  Normal  Scheduled  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-hcq9d  Successfully assigned kubelet-2383/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-hcq9d to bootstrap-e2e-minion-group-5wcz
kubelet-2383  5m10s  Normal  Pulled  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-hcq9d  Container image "k8s.gcr.io/pause:3.1" already present on machine
kubelet-2383  5m10s  Normal  Created  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-hcq9d  Created container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f
kubelet-2383  5m4s  Normal  Started  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-hcq9d  Started container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f
kubelet-2383  4m39s  Normal  Killing  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-hcq9d  Stopping container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f
kubelet-2383  5m23s  Normal  Scheduled  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-hhzq8  Successfully assigned kubelet-2383/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-hhzq8 to bootstrap-e2e-minion-group-5wcz
kubelet-2383  5m8s  Normal  Pulled  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-hhzq8  Container image "k8s.gcr.io/pause:3.1" already present on machine
kubelet-2383  5m8s  Normal  Created  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-hhzq8  Created container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f
kubelet-2383  5m4s  Normal  Started  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-hhzq8  Started container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f
kubelet-2383  4m41s  Normal  Killing  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-hhzq8  Stopping container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f
kubelet-2383  5m24s  Normal  Scheduled  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-hnrhh  Successfully assigned kubelet-2383/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-hnrhh to bootstrap-e2e-minion-group-5wcz
kubelet-2383  5m14s  Normal  Pulled  pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-hnrhh  
            Container image \"k8s.gcr.io/pause:3.1\" already present on machine\nkubelet-2383                         5m14s       Normal    Created                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-hnrhh                         Created container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         5m11s       Normal    Started                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-hnrhh                         Started container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         4m40s       Normal    Killing                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-hnrhh                         Stopping container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         5m23s       Normal    Scheduled                            pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-j75d8                         Successfully assigned kubelet-2383/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-j75d8 to bootstrap-e2e-minion-group-5wcz\nkubelet-2383                         5m12s       Normal    Pulled                               pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-j75d8                         Container image \"k8s.gcr.io/pause:3.1\" already present on machine\nkubelet-2383                         5m11s       Normal    Created                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-j75d8                         Created container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         5m4s        Normal    Started                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-j75d8                         Started container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         4m39s       Normal    Killing                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-j75d8                         Stopping container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         5m23s       Normal    Scheduled                            pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-jgntq                         Successfully assigned kubelet-2383/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-jgntq to bootstrap-e2e-minion-group-mnwl\nkubelet-2383                         5m14s       Normal    Pulled                               pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-jgntq                         Container image \"k8s.gcr.io/pause:3.1\" already present on machine\nkubelet-2383                         5m13s       Normal    Created                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-jgntq                         Created container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         5m11s       Normal    Started                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-jgntq                         Started container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         4m39s       Normal    Killing                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-jgntq                         Stopping container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         5m24s       Normal    Scheduled                            pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-jvmf4                         Successfully 
assigned kubelet-2383/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-jvmf4 to bootstrap-e2e-minion-group-5wcz\nkubelet-2383                         5m10s       Normal    Pulled                               pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-jvmf4                         Container image \"k8s.gcr.io/pause:3.1\" already present on machine\nkubelet-2383                         5m9s        Normal    Created                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-jvmf4                         Created container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         5m4s        Normal    Started                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-jvmf4                         Started container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         4m41s       Normal    Killing                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-jvmf4                         Stopping container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         5m24s       Normal    Scheduled                            pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-k9mhb                         Successfully assigned kubelet-2383/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-k9mhb to bootstrap-e2e-minion-group-9dh8\nkubelet-2383                         5m8s        Normal    Pulled                               pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-k9mhb                         Container image \"k8s.gcr.io/pause:3.1\" already present on machine\nkubelet-2383                         5m8s        Normal    Created                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-k9mhb                         Created container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         5m2s        Normal    Started                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-k9mhb                         Started container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         4m39s       Normal    Killing                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-k9mhb                         Stopping container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         5m24s       Normal    Scheduled                            pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-kgfr6                         Successfully assigned kubelet-2383/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-kgfr6 to bootstrap-e2e-minion-group-9dh8\nkubelet-2383                         5m15s       Normal    Pulled                               pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-kgfr6                         Container image \"k8s.gcr.io/pause:3.1\" already present on machine\nkubelet-2383                         5m15s       Normal    Created                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-kgfr6                         Created container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         5m11s       Normal    Started                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-kgfr6                         Started container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         4m39s       Normal    Killing                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-kgfr6      
                   Stopping container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         5m24s       Normal    Scheduled                            pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-mn7hr                         Successfully assigned kubelet-2383/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-mn7hr to bootstrap-e2e-minion-group-9dh8\nkubelet-2383                         5m7s        Normal    Pulled                               pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-mn7hr                         Container image \"k8s.gcr.io/pause:3.1\" already present on machine\nkubelet-2383                         5m6s        Normal    Created                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-mn7hr                         Created container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         5m1s        Normal    Started                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-mn7hr                         Started container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         4m41s       Normal    Killing                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-mn7hr                         Stopping container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         5m23s       Normal    Scheduled                            pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-mqsrj                         Successfully assigned kubelet-2383/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-mqsrj to bootstrap-e2e-minion-group-9dh8\nkubelet-2383                         5m7s        Normal    Pulled                               pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-mqsrj                         Container image \"k8s.gcr.io/pause:3.1\" already present on machine\nkubelet-2383                         5m6s        Normal    Created                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-mqsrj                         Created container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         5m2s        Normal    Started                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-mqsrj                         Started container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         4m39s       Normal    Killing                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-mqsrj                         Stopping container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         5m24s       Normal    Scheduled                            pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-mzv89                         Successfully assigned kubelet-2383/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-mzv89 to bootstrap-e2e-minion-group-mnwl\nkubelet-2383                         5m18s       Normal    Pulled                               pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-mzv89                         Container image \"k8s.gcr.io/pause:3.1\" already present on machine\nkubelet-2383                         5m18s       Normal    Created                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-mzv89                         Created container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         5m15s       Normal    Started                              
pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-mzv89                         Started container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         4m39s       Normal    Killing                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-mzv89                         Stopping container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         5m24s       Normal    Scheduled                            pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-n7xbh                         Successfully assigned kubelet-2383/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-n7xbh to bootstrap-e2e-minion-group-mnwl\nkubelet-2383                         5m19s       Normal    Pulled                               pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-n7xbh                         Container image \"k8s.gcr.io/pause:3.1\" already present on machine\nkubelet-2383                         5m18s       Normal    Created                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-n7xbh                         Created container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         5m16s       Normal    Started                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-n7xbh                         Started container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         4m41s       Normal    Killing                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-n7xbh                         Stopping container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         5m23s       Normal    Scheduled                            pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-nd9nb                         Successfully assigned kubelet-2383/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-nd9nb to bootstrap-e2e-minion-group-5wcz\nkubelet-2383                         5m12s       Normal    Pulled                               pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-nd9nb                         Container image \"k8s.gcr.io/pause:3.1\" already present on machine\nkubelet-2383                         5m11s       Normal    Created                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-nd9nb                         Created container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         5m5s        Normal    Started                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-nd9nb                         Started container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         4m39s       Normal    Killing                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-nd9nb                         Stopping container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         5m24s       Normal    Scheduled                            pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-nlccd                         Successfully assigned kubelet-2383/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-nlccd to bootstrap-e2e-minion-group-9dh8\nkubelet-2383                         5m11s       Normal    Pulled                               pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-nlccd                         Container image \"k8s.gcr.io/pause:3.1\" already present on machine\nkubelet-2383                         5m11s       Normal    
Created                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-nlccd                         Created container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         5m4s        Normal    Started                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-nlccd                         Started container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         4m40s       Normal    Killing                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-nlccd                         Stopping container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         5m24s       Normal    Scheduled                            pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-s66kf                         Successfully assigned kubelet-2383/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-s66kf to bootstrap-e2e-minion-group-5wcz\nkubelet-2383                         5m11s       Normal    Pulled                               pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-s66kf                         Container image \"k8s.gcr.io/pause:3.1\" already present on machine\nkubelet-2383                         5m11s       Normal    Created                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-s66kf                         Created container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         5m4s        Normal    Started                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-s66kf                         Started container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         4m39s       Normal    Killing                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-s66kf                         Stopping container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         5m23s       Normal    Scheduled                            pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-svcdc                         Successfully assigned kubelet-2383/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-svcdc to bootstrap-e2e-minion-group-n0jl\nkubelet-2383                         5m18s       Normal    Pulled                               pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-svcdc                         Container image \"k8s.gcr.io/pause:3.1\" already present on machine\nkubelet-2383                         5m18s       Normal    Created                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-svcdc                         Created container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         5m14s       Normal    Started                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-svcdc                         Started container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         4m39s       Normal    Killing                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-svcdc                         Stopping container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         5m23s       Normal    Scheduled                            pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-tdf9s                         Successfully assigned kubelet-2383/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-tdf9s to bootstrap-e2e-minion-group-9dh8\nkubelet-2383                 
        5m9s        Normal    Pulled                               pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-tdf9s                         Container image \"k8s.gcr.io/pause:3.1\" already present on machine\nkubelet-2383                         5m9s        Normal    Created                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-tdf9s                         Created container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         5m3s        Normal    Started                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-tdf9s                         Started container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         4m40s       Normal    Killing                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-tdf9s                         Stopping container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         5m24s       Normal    Scheduled                            pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-tmh8p                         Successfully assigned kubelet-2383/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-tmh8p to bootstrap-e2e-minion-group-9dh8\nkubelet-2383                         5m8s        Normal    Pulled                               pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-tmh8p                         Container image \"k8s.gcr.io/pause:3.1\" already present on machine\nkubelet-2383                         5m8s        Normal    Created                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-tmh8p                         Created container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         5m2s        Normal    Started                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-tmh8p                         Started container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         4m40s       Normal    Killing                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-tmh8p                         Stopping container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         5m24s       Normal    Scheduled                            pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-tvhdk                         Successfully assigned kubelet-2383/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-tvhdk to bootstrap-e2e-minion-group-9dh8\nkubelet-2383                         5m7s        Normal    Pulled                               pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-tvhdk                         Container image \"k8s.gcr.io/pause:3.1\" already present on machine\nkubelet-2383                         5m6s        Normal    Created                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-tvhdk                         Created container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         5m2s        Normal    Started                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-tvhdk                         Started container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         4m39s       Normal    Killing                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-tvhdk                         Stopping container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         5m24s       
Normal    Scheduled                            pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-wccz7                         Successfully assigned kubelet-2383/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-wccz7 to bootstrap-e2e-minion-group-n0jl\nkubelet-2383                         5m15s       Normal    Pulled                               pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-wccz7                         Container image \"k8s.gcr.io/pause:3.1\" already present on machine\nkubelet-2383                         5m14s       Normal    Created                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-wccz7                         Created container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         5m12s       Normal    Started                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-wccz7                         Started container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         4m41s       Normal    Killing                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-wccz7                         Stopping container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         5m23s       Normal    Scheduled                            pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-wzq98                         Successfully assigned kubelet-2383/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-wzq98 to bootstrap-e2e-minion-group-9dh8\nkubelet-2383                         5m7s        Normal    Pulled                               pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-wzq98                         Container image \"k8s.gcr.io/pause:3.1\" already present on machine\nkubelet-2383                         5m6s        Normal    Created                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-wzq98                         Created container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         5m2s        Normal    Started                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-wzq98                         Started container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         4m40s       Normal    Killing                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-wzq98                         Stopping container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         5m23s       Normal    Scheduled                            pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-zgncl                         Successfully assigned kubelet-2383/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-zgncl to bootstrap-e2e-minion-group-mnwl\nkubelet-2383                         5m15s       Normal    Pulled                               pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-zgncl                         Container image \"k8s.gcr.io/pause:3.1\" already present on machine\nkubelet-2383                         5m15s       Normal    Created                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-zgncl                         Created container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         5m12s       Normal    Started                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-zgncl                         Started container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383     
                    4m39s       Normal    Killing                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-zgncl                         Stopping container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         5m25s       Normal    Scheduled                            pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-zlgxt                         Successfully assigned kubelet-2383/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-zlgxt to bootstrap-e2e-minion-group-mnwl\nkubelet-2383                         5m22s       Normal    Pulled                               pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-zlgxt                         Container image \"k8s.gcr.io/pause:3.1\" already present on machine\nkubelet-2383                         5m21s       Normal    Created                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-zlgxt                         Created container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         5m19s       Normal    Started                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-zlgxt                         Started container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         4m42s       Normal    Killing                              pod/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-zlgxt                         Stopping container cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f\nkubelet-2383                         5m26s       Normal    SuccessfulCreate                     replicationcontroller/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f             Created pod: cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-zlgxt\nkubelet-2383                         5m26s       Normal    SuccessfulCreate                     replicationcontroller/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f             Created pod: cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-95s6n\nkubelet-2383                         5m26s       Normal    SuccessfulCreate                     replicationcontroller/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f             Created pod: cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-bv2lk\nkubelet-2383                         5m26s       Normal    SuccessfulCreate                     replicationcontroller/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f             Created pod: cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-mzv89\nkubelet-2383                         5m26s       Normal    SuccessfulCreate                     replicationcontroller/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f             Created pod: cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-fmzkv\nkubelet-2383                         5m26s       Normal    SuccessfulCreate                     replicationcontroller/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f             Created pod: cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-n7xbh\nkubelet-2383                         5m26s       Normal    SuccessfulCreate                     replicationcontroller/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f             Created pod: cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-k9mhb\nkubelet-2383                         5m25s       Normal    SuccessfulCreate                     replicationcontroller/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f             Created pod: cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-wccz7\nkubelet-2383                         5m25s       Normal    SuccessfulCreate                     
replicationcontroller/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f             Created pod: cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-s66kf\nkubelet-2383                         5m18s       Normal    SuccessfulCreate                     replicationcontroller/cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f             (combined from similar events): Created pod: cleanup40-0dc8d19c-02f9-47aa-a633-700e7f5d584f-wzq98\nkubelet-test-4199                    4m32s       Normal    Scheduled                            pod/busybox-readonly-fs5d82fcc0-5576-4988-bd25-755110bbfde8                      Successfully assigned kubelet-test-4199/busybox-readonly-fs5d82fcc0-5576-4988-bd25-755110bbfde8 to bootstrap-e2e-minion-group-5wcz\nkubelet-test-4199                    4m28s       Normal    Pulled                               pod/busybox-readonly-fs5d82fcc0-5576-4988-bd25-755110bbfde8                      Container image \"docker.io/library/busybox:1.29\" already present on machine\nkubelet-test-4199                    4m28s       Normal    Created                              pod/busybox-readonly-fs5d82fcc0-5576-4988-bd25-755110bbfde8                      Created container busybox-readonly-fs5d82fcc0-5576-4988-bd25-755110bbfde8\nkubelet-test-4199                    4m26s       Normal    Started                              pod/busybox-readonly-fs5d82fcc0-5576-4988-bd25-755110bbfde8                      Started container busybox-readonly-fs5d82fcc0-5576-4988-bd25-755110bbfde8\nnettest-1801                         3m54s       Normal    Scheduled                            pod/netserver-0                                                                  Successfully assigned nettest-1801/netserver-0 to bootstrap-e2e-minion-group-5wcz\nnettest-1801                         3m51s       Normal    Pulled                               pod/netserver-0                                                                  Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nnettest-1801                         3m50s       Normal    Created                              pod/netserver-0                                                                  Created container webserver\nnettest-1801                         3m50s       Normal    Started                              pod/netserver-0                                                                  Started container webserver\nnettest-1801                         3m54s       Normal    Scheduled                            pod/netserver-1                                                                  Successfully assigned nettest-1801/netserver-1 to bootstrap-e2e-minion-group-9dh8\nnettest-1801                         3m52s       Normal    Pulled                               pod/netserver-1                                                                  Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nnettest-1801                         3m51s       Normal    Created                              pod/netserver-1                                                                  Created container webserver\nnettest-1801                         3m51s       Normal    Started                              pod/netserver-1                                                                  Started container webserver\nnettest-1801                         3m53s       Normal    Scheduled                            pod/netserver-2                                                                 
 Successfully assigned nettest-1801/netserver-2 to bootstrap-e2e-minion-group-mnwl\nnettest-1801                         3m52s       Normal    Pulled                               pod/netserver-2                                                                  Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nnettest-1801                         3m52s       Normal    Created                              pod/netserver-2                                                                  Created container webserver\nnettest-1801                         3m51s       Normal    Started                              pod/netserver-2                                                                  Started container webserver\nnettest-1801                         3m53s       Normal    Scheduled                            pod/netserver-3                                                                  Successfully assigned nettest-1801/netserver-3 to bootstrap-e2e-minion-group-n0jl\nnettest-1801                         3m47s       Normal    Pulled                               pod/netserver-3                                                                  Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nnettest-1801                         3m47s       Normal    Created                              pod/netserver-3                                                                  Created container webserver\nnettest-1801                         3m45s       Normal    Started                              pod/netserver-3                                                                  Started container webserver\nnettest-1801                         3m21s       Normal    Scheduled                            pod/test-container-pod                                                           Successfully assigned nettest-1801/test-container-pod to bootstrap-e2e-minion-group-5wcz\nnettest-1801                         3m20s       Normal    Pulled                               pod/test-container-pod                                                           Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nnettest-1801                         3m20s       Normal    Created                              pod/test-container-pod                                                           Created container webserver\nnettest-1801                         3m19s       Normal    Started                              pod/test-container-pod                                                           Started container webserver\nnettest-4067                         4m13s       Normal    Scheduled                            pod/host-test-container-pod                                                      Successfully assigned nettest-4067/host-test-container-pod to bootstrap-e2e-minion-group-5wcz\nnettest-4067                         4m12s       Normal    Pulled                               pod/host-test-container-pod                                                      Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nnettest-4067                         4m12s       Normal    Created                              pod/host-test-container-pod                                                      Created container agnhost\nnettest-4067                         4m11s       Normal    Started                              pod/host-test-container-pod                         
                             Started container agnhost\nnettest-4067                         4m45s       Normal    Scheduled                            pod/netserver-0                                                                  Successfully assigned nettest-4067/netserver-0 to bootstrap-e2e-minion-group-5wcz\nnettest-4067                         4m42s       Normal    Pulled                               pod/netserver-0                                                                  Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nnettest-4067                         4m42s       Normal    Created                              pod/netserver-0                                                                  Created container webserver\nnettest-4067                         4m41s       Normal    Started                              pod/netserver-0                                                                  Started container webserver\nnettest-4067                         4m44s       Normal    Scheduled                            pod/netserver-1                                                                  Successfully assigned nettest-4067/netserver-1 to bootstrap-e2e-minion-group-9dh8\nnettest-4067                         4m42s       Normal    Pulled                               pod/netserver-1                                                                  Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nnettest-4067                         4m41s       Normal    Created                              pod/netserver-1                                                                  Created container webserver\nnettest-4067                         4m39s       Normal    Started                              pod/netserver-1                                                                  Started container webserver\nnettest-4067                         4m44s       Normal    Scheduled                            pod/netserver-2                                                                  Successfully assigned nettest-4067/netserver-2 to bootstrap-e2e-minion-group-mnwl\nnettest-4067                         4m42s       Normal    Pulled                               pod/netserver-2                                                                  Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nnettest-4067                         4m42s       Normal    Created                              pod/netserver-2                                                                  Created container webserver\nnettest-4067                         4m42s       Normal    Started                              pod/netserver-2                                                                  Started container webserver\nnettest-4067                         4m44s       Normal    Scheduled                            pod/netserver-3                                                                  Successfully assigned nettest-4067/netserver-3 to bootstrap-e2e-minion-group-n0jl\nnettest-4067                         4m39s       Normal    Pulled                               pod/netserver-3                                                                  Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nnettest-4067                         4m39s       Normal    Created                              pod/netserver-3                            
                                      Created container webserver\nnettest-4067                         4m34s       Normal    Started                              pod/netserver-3                                                                  Started container webserver\nnettest-4067                         4m13s       Normal    Scheduled                            pod/test-container-pod                                                           Successfully assigned nettest-4067/test-container-pod to bootstrap-e2e-minion-group-mnwl\nnettest-4067                         4m12s       Normal    Pulled                               pod/test-container-pod                                                           Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nnettest-4067                         4m12s       Normal    Created                              pod/test-container-pod                                                           Created container webserver\nnettest-4067                         4m12s       Normal    Started                              pod/test-container-pod                                                           Started container webserver\nnettest-6829                         8m12s       Normal    Scheduled                            pod/netserver-0                                                                  Successfully assigned nettest-6829/netserver-0 to bootstrap-e2e-minion-group-5wcz\nnettest-6829                         8m9s        Normal    Pulled                               pod/netserver-0                                                                  Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nnettest-6829                         8m9s        Normal    Created                              pod/netserver-0                                                                  Created container webserver\nnettest-6829                         8m8s        Normal    Started                              pod/netserver-0                                                                  Started container webserver\nnettest-6829                         7m          Normal    Killing                              pod/netserver-0                                                                  Stopping container webserver\nnettest-6829                         8m12s       Normal    Scheduled                            pod/netserver-1                                                                  Successfully assigned nettest-6829/netserver-1 to bootstrap-e2e-minion-group-9dh8\nnettest-6829                         8m7s        Normal    Pulled                               pod/netserver-1                                                                  Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nnettest-6829                         8m7s        Normal    Created                              pod/netserver-1                                                                  Created container webserver\nnettest-6829                         8m5s        Normal    Started                              pod/netserver-1                                                                  Started container webserver\nnettest-6829                         8m12s       Normal    Scheduled                            pod/netserver-2                                                                  Successfully assigned nettest-6829/netserver-2 to 
bootstrap-e2e-minion-group-mnwl\nnettest-6829                         8m10s       Normal    Pulled                               pod/netserver-2                                                                  Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nnettest-6829                         8m10s       Normal    Created                              pod/netserver-2                                                                  Created container webserver\nnettest-6829                         8m9s        Normal    Started                              pod/netserver-2                                                                  Started container webserver\nnettest-6829                         8m12s       Normal    Scheduled                            pod/netserver-3                                                                  Successfully assigned nettest-6829/netserver-3 to bootstrap-e2e-minion-group-n0jl\nnettest-6829                         8m3s        Normal    Pulled                               pod/netserver-3                                                                  Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nnettest-6829                         8m1s        Normal    Created                              pod/netserver-3                                                                  Created container webserver\nnettest-6829                         7m58s       Normal    Started                              pod/netserver-3                                                                  Started container webserver\nnettest-6829                         7m          Warning   FailedToUpdateEndpoint               endpoints/node-port-service                                                      Failed to update endpoint nettest-6829/node-port-service: Operation cannot be fulfilled on endpoints \"node-port-service\": the object has been modified; please apply your changes to the latest version and try again\nnettest-6829                         7m          Warning   FailedToUpdateEndpoint               endpoints/session-affinity-service                                               Failed to update endpoint nettest-6829/session-affinity-service: Operation cannot be fulfilled on endpoints \"session-affinity-service\": the object has been modified; please apply your changes to the latest version and try again\nnettest-6829                         7m39s       Normal    Scheduled                            pod/test-container-pod                                                           Successfully assigned nettest-6829/test-container-pod to bootstrap-e2e-minion-group-9dh8\nnettest-6829                         7m36s       Normal    Pulled                               pod/test-container-pod                                                           Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nnettest-6829                         7m36s       Normal    Created                              pod/test-container-pod                                                           Created container webserver\nnettest-6829                         7m33s       Normal    Started                              pod/test-container-pod                                                           Started container webserver\npersistent-local-volumes-test-1099   2m28s       Warning   FailedMount                          
pod/hostexec-bootstrap-e2e-minion-group-5wcz-zmpxq                               MountVolume.SetUp failed for volume \"default-token-sr9fp\" : failed to sync secret cache: timed out waiting for the condition\npersistent-local-volumes-test-1099   2m26s       Normal    Pulled                               pod/hostexec-bootstrap-e2e-minion-group-5wcz-zmpxq                               Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\npersistent-local-volumes-test-1099   2m26s       Normal    Created                              pod/hostexec-bootstrap-e2e-minion-group-5wcz-zmpxq                               Created container agnhost\npersistent-local-volumes-test-1099   2m26s       Normal    Started                              pod/hostexec-bootstrap-e2e-minion-group-5wcz-zmpxq                               Started container agnhost\npersistent-local-volumes-test-1231   2m26s       Warning   FailedMount                          pod/hostexec-bootstrap-e2e-minion-group-5wcz-8czgs                               MountVolume.SetUp failed for volume \"default-token-qpfm9\" : failed to sync secret cache: timed out waiting for the condition\npersistent-local-volumes-test-1231   2m23s       Normal    Pulled                               pod/hostexec-bootstrap-e2e-minion-group-5wcz-8czgs                               Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\npersistent-local-volumes-test-1231   2m23s       Normal    Created                              pod/hostexec-bootstrap-e2e-minion-group-5wcz-8czgs                               Created container agnhost\npersistent-local-volumes-test-1231   2m21s       Normal    Started                              pod/hostexec-bootstrap-e2e-minion-group-5wcz-8czgs                               Started container agnhost\npersistent-local-volumes-test-1231   2m9s        Normal    Scheduled                            pod/security-context-52d02b1d-d203-4856-9892-9fb58b53d588                        Successfully assigned persistent-local-volumes-test-1231/security-context-52d02b1d-d203-4856-9892-9fb58b53d588 to bootstrap-e2e-minion-group-5wcz\npersistent-local-volumes-test-1231   2m6s        Normal    Pulled                               pod/security-context-52d02b1d-d203-4856-9892-9fb58b53d588                        Container image \"docker.io/library/busybox:1.29\" already present on machine\npersistent-local-volumes-test-1231   2m6s        Normal    Created                              pod/security-context-52d02b1d-d203-4856-9892-9fb58b53d588                        Created container write-pod\npersistent-local-volumes-test-1231   2m6s        Normal    Started                              pod/security-context-52d02b1d-d203-4856-9892-9fb58b53d588                        Started container write-pod\npersistent-local-volumes-test-1231   2m2s        Normal    Killing                              pod/security-context-52d02b1d-d203-4856-9892-9fb58b53d588                        Stopping container write-pod\npersistent-local-volumes-test-2922   5m6s        Normal    Pulled                               pod/hostexec-bootstrap-e2e-minion-group-5wcz-smxv2                               Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\npersistent-local-volumes-test-2922   5m5s        Normal    Created                              pod/hostexec-bootstrap-e2e-minion-group-5wcz-smxv2                               Created container 
agnhost
persistent-local-volumes-test-2922   5m4s        Normal    Started                              pod/hostexec-bootstrap-e2e-minion-group-5wcz-smxv2                               Started container agnhost
persistent-local-volumes-test-2922   4m50s       Normal    Scheduled                            pod/security-context-2d63d994-2ee5-4d1c-b6d0-a515debfc284                        Successfully assigned persistent-local-volumes-test-2922/security-context-2d63d994-2ee5-4d1c-b6d0-a515debfc284 to bootstrap-e2e-minion-group-5wcz
persistent-local-volumes-test-2922   4m47s       Normal    Pulled                               pod/security-context-2d63d994-2ee5-4d1c-b6d0-a515debfc284                        Container image "docker.io/library/busybox:1.29" already present on machine
persistent-local-volumes-test-2922   4m47s       Normal    Created                              pod/security-context-2d63d994-2ee5-4d1c-b6d0-a515debfc284                        Created container write-pod
persistent-local-volumes-test-2922   4m46s       Normal    Started                              pod/security-context-2d63d994-2ee5-4d1c-b6d0-a515debfc284                        Started container write-pod
persistent-local-volumes-test-2922   4m11s       Normal    Killing                              pod/security-context-2d63d994-2ee5-4d1c-b6d0-a515debfc284                        Stopping container write-pod
persistent-local-volumes-test-2922   4m31s       Normal    Scheduled                            pod/security-context-ddd20301-5961-4b55-97b3-2596f3245e04                        Successfully assigned persistent-local-volumes-test-2922/security-context-ddd20301-5961-4b55-97b3-2596f3245e04 to bootstrap-e2e-minion-group-5wcz
persistent-local-volumes-test-2922   4m27s       Normal    Pulled                               pod/security-context-ddd20301-5961-4b55-97b3-2596f3245e04                        Container image "docker.io/library/busybox:1.29" already present on machine
persistent-local-volumes-test-2922   4m27s       Normal    Created                              pod/security-context-ddd20301-5961-4b55-97b3-2596f3245e04                        Created container write-pod
persistent-local-volumes-test-2922   4m26s       Normal    Started                              pod/security-context-ddd20301-5961-4b55-97b3-2596f3245e04                        Started container write-pod
persistent-local-volumes-test-2922   4m10s       Normal    Killing                              pod/security-context-ddd20301-5961-4b55-97b3-2596f3245e04                        Stopping container write-pod
persistent-local-volumes-test-4306   4m16s       Warning   FailedMount                          pod/hostexec-bootstrap-e2e-minion-group-5wcz-hdfdg                               MountVolume.SetUp failed for volume "default-token-2xkzd" : failed to sync secret cache: timed out waiting for the condition
persistent-local-volumes-test-4306   4m15s       Normal    Pulled                               pod/hostexec-bootstrap-e2e-minion-group-5wcz-hdfdg                               Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
persistent-local-volumes-test-4306   4m15s       Normal    Created                              pod/hostexec-bootstrap-e2e-minion-group-5wcz-hdfdg                               Created container agnhost
persistent-local-volumes-test-4306   4m15s       Normal    Started                              pod/hostexec-bootstrap-e2e-minion-group-5wcz-hdfdg                               Started container agnhost
persistent-local-volumes-test-4644   109s        Normal    Pulled                               pod/hostexec-bootstrap-e2e-minion-group-5wcz-fxzlr                               Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
persistent-local-volumes-test-4644   109s        Normal    Created                              pod/hostexec-bootstrap-e2e-minion-group-5wcz-fxzlr                               Created container agnhost
persistent-local-volumes-test-4644   108s        Normal    Started                              pod/hostexec-bootstrap-e2e-minion-group-5wcz-fxzlr                               Started container agnhost
persistent-local-volumes-test-4644   84s         Normal    Scheduled                            pod/security-context-25cbde65-b7bc-463b-964b-5a156ef7a357                        Successfully assigned persistent-local-volumes-test-4644/security-context-25cbde65-b7bc-463b-964b-5a156ef7a357 to bootstrap-e2e-minion-group-5wcz
persistent-local-volumes-test-4644   78s         Normal    Pulled                               pod/security-context-25cbde65-b7bc-463b-964b-5a156ef7a357                        Container image "docker.io/library/busybox:1.29" already present on machine
persistent-local-volumes-test-4644   77s         Normal    Created                              pod/security-context-25cbde65-b7bc-463b-964b-5a156ef7a357                        Created container write-pod
persistent-local-volumes-test-4644   76s         Normal    Started                              pod/security-context-25cbde65-b7bc-463b-964b-5a156ef7a357                        Started container write-pod
persistent-local-volumes-test-4644   63s         Normal    Killing                              pod/security-context-25cbde65-b7bc-463b-964b-5a156ef7a357                        Stopping container write-pod
persistent-local-volumes-test-4644   93s         Normal    Scheduled                            pod/security-context-5db7b13a-b5da-4408-aba3-354be1d64584                        Successfully assigned persistent-local-volumes-test-4644/security-context-5db7b13a-b5da-4408-aba3-354be1d64584 to bootstrap-e2e-minion-group-5wcz
persistent-local-volumes-test-4644   90s         Normal    Pulled                               pod/security-context-5db7b13a-b5da-4408-aba3-354be1d64584                        Container image "docker.io/library/busybox:1.29" already present on machine
persistent-local-volumes-test-4644   90s         Normal    Created                              pod/security-context-5db7b13a-b5da-4408-aba3-354be1d64584                        Created container write-pod
persistent-local-volumes-test-4644   90s         Normal    Started                              pod/security-context-5db7b13a-b5da-4408-aba3-354be1d64584                        Started container write-pod
persistent-local-volumes-test-4644   64s         Normal    Killing                              pod/security-context-5db7b13a-b5da-4408-aba3-354be1d64584                        Stopping container write-pod
persistent-local-volumes-test-5270   116s        Normal    Pulled                               pod/hostexec-bootstrap-e2e-minion-group-5wcz-l8bx7                               Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
persistent-local-volumes-test-5270   116s        Normal    Created                              pod/hostexec-bootstrap-e2e-minion-group-5wcz-l8bx7                               Created container agnhost
persistent-local-volumes-test-5270   115s        Normal    Started                              pod/hostexec-bootstrap-e2e-minion-group-5wcz-l8bx7                               Started container agnhost
persistent-local-volumes-test-5270   109s        Normal    Scheduled                            pod/security-context-f5c75f8e-a0bb-48c6-924f-f719d198f889                        Successfully assigned persistent-local-volumes-test-5270/security-context-f5c75f8e-a0bb-48c6-924f-f719d198f889 to bootstrap-e2e-minion-group-5wcz
persistent-local-volumes-test-5270   107s        Normal    Pulled                               pod/security-context-f5c75f8e-a0bb-48c6-924f-f719d198f889                        Container image "docker.io/library/busybox:1.29" already present on machine
persistent-local-volumes-test-5270   107s        Normal    Created                              pod/security-context-f5c75f8e-a0bb-48c6-924f-f719d198f889                        Created container write-pod
persistent-local-volumes-test-5270   106s        Normal    Started                              pod/security-context-f5c75f8e-a0bb-48c6-924f-f719d198f889                        Started container write-pod
persistent-local-volumes-test-5270   97s         Normal    Killing                              pod/security-context-f5c75f8e-a0bb-48c6-924f-f719d198f889                        Stopping container write-pod
persistent-local-volumes-test-5643   5m52s       Warning   FailedMount                          pod/hostexec-bootstrap-e2e-minion-group-5wcz-ndgl6                               MountVolume.SetUp failed for volume "default-token-pgch7" : failed to sync secret cache: timed out waiting for the condition
persistent-local-volumes-test-5643   5m49s       Normal    Pulled                               pod/hostexec-bootstrap-e2e-minion-group-5wcz-ndgl6                               Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
persistent-local-volumes-test-5643   5m48s       Normal    Created                              pod/hostexec-bootstrap-e2e-minion-group-5wcz-ndgl6                               Created container agnhost
persistent-local-volumes-test-5643   5m47s       Normal    Started                              pod/hostexec-bootstrap-e2e-minion-group-5wcz-ndgl6                               Started container agnhost
persistent-local-volumes-test-5643   5m12s       Normal    Scheduled                            pod/security-context-39080148-5958-4c04-b822-258281a86928                        Successfully assigned persistent-local-volumes-test-5643/security-context-39080148-5958-4c04-b822-258281a86928 to bootstrap-e2e-minion-group-5wcz
persistent-local-volumes-test-5643   5m3s        Normal    Pulled                               pod/security-context-39080148-5958-4c04-b822-258281a86928                        Container image "docker.io/library/busybox:1.29" already present on machine
persistent-local-volumes-test-5643   5m3s        Normal    Created                              pod/security-context-39080148-5958-4c04-b822-258281a86928                        Created container write-pod
persistent-local-volumes-test-5643   5m2s        Normal    Started                              pod/security-context-39080148-5958-4c04-b822-258281a86928                        Started container write-pod
persistent-local-volumes-test-5643   4m52s       Normal    Killing                              pod/security-context-39080148-5958-4c04-b822-258281a86928                        Stopping container write-pod
persistent-local-volumes-test-5643   5m37s       Normal    Scheduled                            pod/security-context-c45893c9-2166-4880-9db5-ae08828c7873                        Successfully assigned persistent-local-volumes-test-5643/security-context-c45893c9-2166-4880-9db5-ae08828c7873 to bootstrap-e2e-minion-group-5wcz
persistent-local-volumes-test-5643   5m34s       Normal    Pulled                               pod/security-context-c45893c9-2166-4880-9db5-ae08828c7873                        Container image "docker.io/library/busybox:1.29" already present on machine
persistent-local-volumes-test-5643   5m34s       Normal    Created                              pod/security-context-c45893c9-2166-4880-9db5-ae08828c7873                        Created container write-pod
persistent-local-volumes-test-5643   5m34s       Normal    Started                              pod/security-context-c45893c9-2166-4880-9db5-ae08828c7873                        Started container write-pod
persistent-local-volumes-test-5643   5m12s       Normal    Killing                              pod/security-context-c45893c9-2166-4880-9db5-ae08828c7873                        Stopping container write-pod
persistent-local-volumes-test-7600   2m36s       Normal    Pulled                               pod/hostexec-bootstrap-e2e-minion-group-5wcz-4xxc4                               Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
persistent-local-volumes-test-7600   2m36s       Normal    Created                              pod/hostexec-bootstrap-e2e-minion-group-5wcz-4xxc4                               Created container agnhost
persistent-local-volumes-test-7600   2m36s       Normal    Started                              pod/hostexec-bootstrap-e2e-minion-group-5wcz-4xxc4                               Started container agnhost
persistent-local-volumes-test-776    2m21s       Normal    Pulled                               pod/hostexec-bootstrap-e2e-minion-group-5wcz-lmzlw                               Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
persistent-local-volumes-test-776    2m20s       Normal    Created                              pod/hostexec-bootstrap-e2e-minion-group-5wcz-lmzlw                               Created container agnhost
persistent-local-volumes-test-776    2m18s       Normal    Started                              pod/hostexec-bootstrap-e2e-minion-group-5wcz-lmzlw                               Started container agnhost
persistent-local-volumes-test-776    2m9s        Warning   ProvisioningFailed                   persistentvolumeclaim/pvc-s5b65                                                  no volume plugin matched
persistent-local-volumes-test-776    2m          Normal    Scheduled                            pod/security-context-dc95c296-7731-4bf3-8cf4-daeba1194f2d                        Successfully assigned persistent-local-volumes-test-776/security-context-dc95c296-7731-4bf3-8cf4-daeba1194f2d to bootstrap-e2e-minion-group-5wcz
persistent-local-volumes-test-776    118s        Normal    Pulled                               pod/security-context-dc95c296-7731-4bf3-8cf4-daeba1194f2d                        Container image "docker.io/library/busybox:1.29" already present on machine
persistent-local-volumes-test-776    118s        Normal    Created                              pod/security-context-dc95c296-7731-4bf3-8cf4-daeba1194f2d                        Created container write-pod
persistent-local-volumes-test-776    118s        Normal    Started                              pod/security-context-dc95c296-7731-4bf3-8cf4-daeba1194f2d                        Started container write-pod
persistent-local-volumes-test-776    112s        Normal    Killing                              pod/security-context-dc95c296-7731-4bf3-8cf4-daeba1194f2d                        Stopping container write-pod
pod-network-test-2281                5m20s       Normal    Scheduled                            pod/netserver-0                                                                  Successfully assigned pod-network-test-2281/netserver-0 to bootstrap-e2e-minion-group-5wcz
pod-network-test-2281                5m7s        Normal    Pulled                               pod/netserver-0                                                                  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
pod-network-test-2281                5m7s        Normal    Created                              pod/netserver-0                                                                  Created container webserver
pod-network-test-2281                5m4s        Normal    Started                              pod/netserver-0                                                                  Started container webserver
pod-network-test-2281                5m20s       Normal    Scheduled                            pod/netserver-1                                                                  Successfully assigned pod-network-test-2281/netserver-1 to bootstrap-e2e-minion-group-9dh8
pod-network-test-2281                5m7s        Normal    Pulled                               pod/netserver-1                                                                  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
pod-network-test-2281                5m7s        Normal    Created                              pod/netserver-1                                                                  Created container webserver
pod-network-test-2281                5m3s        Normal    Started                              pod/netserver-1                                                                  Started container webserver
pod-network-test-2281                5m19s       Normal    Scheduled                            pod/netserver-2                                                                  Successfully assigned pod-network-test-2281/netserver-2 to bootstrap-e2e-minion-group-mnwl
pod-network-test-2281                5m13s       Normal    Pulled                               pod/netserver-2                                                                  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
pod-network-test-2281                5m13s       Normal    Created                              pod/netserver-2                                                                  Created container webserver
pod-network-test-2281                5m12s       Normal    Started                              pod/netserver-2                                                                  Started container webserver
pod-network-test-2281                5m19s       Normal    Scheduled                            pod/netserver-3                                                                  Successfully assigned pod-network-test-2281/netserver-3 to bootstrap-e2e-minion-group-n0jl
pod-network-test-2281                5m13s       Normal    Pulled                               pod/netserver-3                                                                  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
pod-network-test-2281                5m12s       Normal    Created                              pod/netserver-3                                                                  Created container webserver
pod-network-test-2281                5m11s       Normal    Started                              pod/netserver-3                                                                  Started container webserver
pod-network-test-2281                4m31s       Normal    Scheduled                            pod/test-container-pod                                                           Successfully assigned pod-network-test-2281/test-container-pod to bootstrap-e2e-minion-group-n0jl
pod-network-test-2281                4m20s       Normal    Pulled                               pod/test-container-pod                                                           Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
pod-network-test-2281                4m20s       Normal    Created                              pod/test-container-pod                                                           Created container webserver
pod-network-test-2281                4m16s       Normal    Started                              pod/test-container-pod                                                           Started container webserver
pods-4409                            2m35s       Normal    Scheduled                            pod/client-envvars-25c87b42-9604-4dbb-94f0-923cab4cab91                          Successfully assigned pods-4409/client-envvars-25c87b42-9604-4dbb-94f0-923cab4cab91 to bootstrap-e2e-minion-group-n0jl
pods-4409                            2m33s       Normal    Pulled                               pod/client-envvars-25c87b42-9604-4dbb-94f0-923cab4cab91                          Container image "docker.io/library/busybox:1.29" already present on machine
pods-4409                            2m33s       Normal    Created                              pod/client-envvars-25c87b42-9604-4dbb-94f0-923cab4cab91                          Created container env3cont
pods-4409                            2m32s       Normal    Started                              pod/client-envvars-25c87b42-9604-4dbb-94f0-923cab4cab91                          Started container env3cont
pods-4409                            2m50s       Normal    Scheduled                            pod/server-envvars-d26dce68-0196-46e2-8967-d10506800372                          Successfully assigned pods-4409/server-envvars-d26dce68-0196-46e2-8967-d10506800372 to bootstrap-e2e-minion-group-9dh8
pods-4409                            2m46s       Normal    Pulled                               pod/server-envvars-d26dce68-0196-46e2-8967-d10506800372                          Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
pods-4409                            2m46s       Normal    Created                              pod/server-envvars-d26dce68-0196-46e2-8967-d10506800372                          Created container srv
pods-4409                            2m46s       Normal    Started                              pod/server-envvars-d26dce68-0196-46e2-8967-d10506800372                          Started container srv
port-forwarding-4477                 31s         Normal    Scheduled                            pod/pfpod                                                                        Successfully assigned port-forwarding-4477/pfpod to bootstrap-e2e-minion-group-9dh8
port-forwarding-4477                 30s         Warning   FailedMount                          pod/pfpod                                                                        MountVolume.SetUp failed for volume "default-token-tfn8s" : failed to sync secret cache: timed out waiting for the condition
port-forwarding-4477                 27s         Normal    Pulled                               pod/pfpod                                                                        Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
port-forwarding-4477                 27s         Normal    Created                              pod/pfpod                                                                        Created container readiness
port-forwarding-4477                 26s         Normal    Started                              pod/pfpod                                                                        Started container readiness
port-forwarding-4477                 25s         Normal    Pulled                               pod/pfpod                                                                        Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
port-forwarding-4477                 25s         Normal    Created                              pod/pfpod                                                                        Created container portforwardtester
port-forwarding-4477                 25s         Normal    Started                              pod/pfpod                                                                        Started container portforwardtester
port-forwarding-4477                 5s          Warning   Unhealthy                            pod/pfpod                                                                        Readiness probe failed:
port-forwarding-5553                 3m29s       Normal    Scheduled                            pod/pfpod                                                                        Successfully assigned port-forwarding-5553/pfpod to bootstrap-e2e-minion-group-9dh8
port-forwarding-5553                 3m24s       Normal    Pulled                               pod/pfpod                                                                        Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
port-forwarding-5553                 3m24s       Normal    Created                              pod/pfpod                                                                        Created container readiness
port-forwarding-5553                 3m22s       Normal    Started                              pod/pfpod                                                                        Started container readiness
port-forwarding-5553                 3m22s       Normal    Pulled                               pod/pfpod                                                                        Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
port-forwarding-5553                 3m22s       Normal    Created                              pod/pfpod                                                                        Created container portforwardtester
port-forwarding-5553                 3m21s       Normal    Started                              pod/pfpod                                                                        Started container portforwardtester
port-forwarding-5553                 2m41s       Warning   Unhealthy                            pod/pfpod                                                                        Re