Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2020-01-15 15:33
Elapsed: 1h13m
Revision: master
resultstore: https://source.cloud.google.com/results/invocations/df22cd65-9cdb-4c04-aefc-66ad20d34f37/targets/test

No Test Failures!


Error lines from build-log.txt

... skipping 612 lines ...
Trying to find master named 'bootstrap-e2e-master'
Looking for address 'bootstrap-e2e-master-ip'
Using master: bootstrap-e2e-master (external IP: 35.197.107.52; internal IP: (not set))
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

.............Kubernetes cluster created.
Cluster "k8s-gce-soak-1-5_bootstrap-e2e" set.
User "k8s-gce-soak-1-5_bootstrap-e2e" set.
Context "k8s-gce-soak-1-5_bootstrap-e2e" created.
Switched to context "k8s-gce-soak-1-5_bootstrap-e2e".
... skipping 22 lines ...
bootstrap-e2e-master              Ready,SchedulingDisabled   <none>   25s   v1.18.0-alpha.1.755+05209312b74eac
bootstrap-e2e-minion-group-q10p   Ready                      <none>   21s   v1.18.0-alpha.1.755+05209312b74eac
bootstrap-e2e-minion-group-qkcq   Ready                      <none>   20s   v1.18.0-alpha.1.755+05209312b74eac
bootstrap-e2e-minion-group-qn53   Ready                      <none>   21s   v1.18.0-alpha.1.755+05209312b74eac
bootstrap-e2e-minion-group-vrtv   Ready                      <none>   20s   v1.18.0-alpha.1.755+05209312b74eac
Validate output:
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-1               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}   
Cluster validation succeeded
Done, listing cluster services:
... skipping 78 lines ...

Specify --start=46889 in the next get-serial-port-output invocation to get only the new output starting from here.
scp: /var/log/cluster-autoscaler.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/kubelet.cov.tmp: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Dumping logs from nodes locally to '/logs/artifacts/before'
Detecting nodes in the cluster
Changing logfiles to be world-readable for download
Changing logfiles to be world-readable for download
Changing logfiles to be world-readable for download
Changing logfiles to be world-readable for download
... skipping 14 lines ...
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov.tmp: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
INSTANCE_GROUPS=bootstrap-e2e-minion-group
NODE_NAMES=bootstrap-e2e-minion-group-q10p bootstrap-e2e-minion-group-qkcq bootstrap-e2e-minion-group-qn53 bootstrap-e2e-minion-group-vrtv
Failures for bootstrap-e2e-minion-group (if any):
2020/01/15 16:14:18 process.go:155: Step './cluster/log-dump/log-dump.sh /logs/artifacts/before' finished in 2m11.954228228s
2020/01/15 16:14:18 process.go:153: Running: ./hack/e2e-internal/e2e-status.sh
Project: k8s-gce-soak-1-5
... skipping 279 lines ...
[sig-storage] CSI Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:55
    [Testpattern: Dynamic PV (delayed binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver "csi-hostpath" does not support topology - skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:96
------------------------------
... skipping 645 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: gluster]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver gluster doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 149 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: tmpfs]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 221 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 49 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:14:40.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5037" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply should apply a new configuration to an existing RC","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:14:41.357: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:14:41.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 69 lines ...
• [SLOW TEST:5.260 seconds]
[sig-api-machinery] Servers with support for Table transformation
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should return a 406 for a backend which does not implement metadata [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":1,"skipped":10,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Zone Support
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 43 lines ...
Jan 15 16:14:42.995: INFO: AfterEach: Cleaning up test resources


S [SKIPPING] in Spec Setup (BeforeEach) [6.106 seconds]
[sig-storage] PersistentVolumes:vsphere
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should test that deleting a PVC before the pod does not cause pod deletion to fail on vsphere volume detach [BeforeEach]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere/persistent_volumes-vsphere.go:147

  Only supported for providers [vsphere] (not gce)

  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere/persistent_volumes-vsphere.go:63
------------------------------
... skipping 219 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:43
    volume on tmpfs should have the correct mode using FSGroup
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:70
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on tmpfs should have the correct mode using FSGroup","total":-1,"completed":1,"skipped":5,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
... skipping 63 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run CronJob
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1822
    should create a CronJob
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1835
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run CronJob should create a CronJob","total":-1,"completed":1,"skipped":8,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:14:43.344: INFO: Only supported for providers [openstack] (not gce)
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:14:43.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 59 lines ...
• [SLOW TEST:6.699 seconds]
[sig-node] RuntimeClass
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtimeclass.go:39
  should reject a Pod requesting a RuntimeClass with an unconfigured handler
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtimeclass.go:47
------------------------------
{"msg":"PASSED [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with an unconfigured handler","total":-1,"completed":1,"skipped":5,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:14:43.564: INFO: Driver gluster doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:14:43.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 152 lines ...
• [SLOW TEST:8.248 seconds]
[sig-storage] PV Protection
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Verify that PV bound to a PVC is not removed immediately
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:105
------------------------------
{"msg":"PASSED [sig-storage] PV Protection Verify that PV bound to a PVC is not removed immediately","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:14:45.079: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:14:45.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 82 lines ...
• [SLOW TEST:13.921 seconds]
[sig-storage] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":13,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:14:50.843: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:14:50.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 10 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":14,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:14:50.848: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 73 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when creating containers with AllowPrivilegeEscalation
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:290
    should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":8,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:14:52.401: INFO: Driver gluster doesn't support DynamicPV -- skipping
... skipping 92 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:14:54.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-5970" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services","total":-1,"completed":2,"skipped":15,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 28 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:43
    volume on default medium should have the correct mode using FSGroup
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:66
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup","total":-1,"completed":3,"skipped":16,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 26 lines ...
• [SLOW TEST:20.846 seconds]
[sig-api-machinery] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:15:02.812: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:15:02.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 180 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1541
    should add annotations for pods in rc  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":-1,"completed":1,"skipped":7,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:15:02.887: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 105 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:43
    new files should be created with FSGroup ownership when container is non-root
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:54
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root","total":-1,"completed":1,"skipped":19,"failed":0}

SS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 78 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when create a pod with lifecycle hook
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":6,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:15:09.651: INFO: Driver gluster doesn't support DynamicPV -- skipping
... skipping 179 lines ...
• [SLOW TEST:32.332 seconds]
[sig-apps] ReplicaSet
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":2,"skipped":11,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":1,"skipped":17,"failed":0}
[BeforeEach] [sig-api-machinery] ResourceQuota
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 15 16:15:03.582: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-2905
... skipping 12 lines ...
• [SLOW TEST:10.946 seconds]
[sig-api-machinery] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":-1,"completed":2,"skipped":17,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:15:14.534: INFO: Only supported for providers [openstack] (not gce)
... skipping 48 lines ...
• [SLOW TEST:12.392 seconds]
[k8s.io] Variable Expansion
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":10,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 27 lines ...
• [SLOW TEST:13.835 seconds]
[sig-storage] Downward API volume
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":13,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:15:16.738: INFO: Only supported for providers [vsphere] (not gce)
... skipping 52 lines ...
• [SLOW TEST:15.051 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":4,"skipped":19,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:15:17.793: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 87 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:377
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":1,"skipped":4,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
... skipping 80 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:15:22.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5557" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply should reuse port when apply to an existing SVC","total":-1,"completed":5,"skipped":23,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:15:22.410: INFO: Driver azure-disk doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:15:22.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 94 lines ...
• [SLOW TEST:12.699 seconds]
[sig-storage] Projected secret
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":21,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:15:23.951: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 77 lines ...
• [SLOW TEST:37.743 seconds]
[sig-autoscaling] DNS horizontal autoscaling
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/framework.go:23
  [DisabledForLargeClusters] kube-dns-autoscaler should scale kube-dns pods in both nonfaulty and faulty scenarios
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/dns_autoscaling.go:168
------------------------------
{"msg":"PASSED [sig-autoscaling] DNS horizontal autoscaling [DisabledForLargeClusters] kube-dns-autoscaler should scale kube-dns pods in both nonfaulty and faulty scenarios","total":-1,"completed":2,"skipped":17,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:15:28.605: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 46 lines ...
• [SLOW TEST:6.553 seconds]
[sig-storage] ConfigMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:107
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":6,"skipped":29,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:15:28.977: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:15:28.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 207 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:377
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":1,"skipped":9,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:15:33.243: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 38 lines ...
• [SLOW TEST:62.396 seconds]
[k8s.io] Probing container
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:15:39.229: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 173 lines ...
• [SLOW TEST:15.914 seconds]
[sig-storage] HostPath
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should support r/w [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:65
------------------------------
{"msg":"PASSED [sig-storage] HostPath should support r/w [NodeConformance]","total":-1,"completed":3,"skipped":23,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:15:39.871: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:15:39.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 104 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":1,"skipped":20,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 57 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should be able to unmount after the subpath directory is deleted
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:440
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":1,"skipped":9,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 65 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    One pod requesting one prebound PVC
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
      should be able to mount volume and write from pod1
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:233
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":1,"skipped":5,"failed":0}

S
------------------------------
[BeforeEach] [k8s.io] [sig-node] AppArmor
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 122 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should be able to unmount after the subpath directory is deleted
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:440
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":3,"skipped":24,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 72 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":1,"skipped":31,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 64 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    should adopt matching orphans and release non-matching pods
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:159
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should adopt matching orphans and release non-matching pods","total":-1,"completed":1,"skipped":2,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:15:51.324: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 175 lines ...
• [SLOW TEST:75.095 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should implement service.kubernetes.io/service-proxy-name
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2431
------------------------------
{"msg":"PASSED [sig-network] Services should implement service.kubernetes.io/service-proxy-name","total":-1,"completed":1,"skipped":4,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:15:51.983: INFO: Driver hostPath doesn't support ntfs -- skipping
... skipping 15 lines ...
      Driver hostPath doesn't support ntfs -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:153
------------------------------
S
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] AppArmor load AppArmor profiles can disable an AppArmor profile, using unconfined","total":-1,"completed":3,"skipped":23,"failed":0}
[BeforeEach] [sig-node] Downward API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 15 16:15:46.151: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-7889
... skipping 20 lines ...
• [SLOW TEST:6.756 seconds]
[sig-node] Downward API
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":23,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
... skipping 57 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":1,"skipped":5,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 59 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly directory specified in the volumeMount
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:362
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:15:53.678: INFO: Only supported for providers [vsphere] (not gce)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:15:53.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 173 lines ...
• [SLOW TEST:5.785 seconds]
[sig-storage] Projected configMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:73
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":2,"skipped":32,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] DNS
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 23 lines ...
• [SLOW TEST:39.896 seconds]
[sig-network] DNS
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":-1,"completed":3,"skipped":11,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Zone Support
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 128 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI Volume expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:419
    should expand volume by restarting pod if attach=on, nodeExpansion=on
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:448
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=on, nodeExpansion=on","total":-1,"completed":2,"skipped":3,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:15:57.453: INFO: Driver gluster doesn't support DynamicPV -- skipping
... skipping 117 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    Two pods mounting a local volume at the same time
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:243
      should be able to write from pod1 and read from pod2
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:244
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":2,"skipped":4,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:15:58.007: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 78 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image  [Conformance]","total":-1,"completed":2,"skipped":21,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 15 16:15:11.642: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 59 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing single file [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:217
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":3,"skipped":21,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:15:58.176: INFO: Driver csi-hostpath doesn't support ntfs -- skipping
... skipping 139 lines ...
Jan 15 16:15:03.701: INFO: PersistentVolumeClaim pvc-mss5l found but phase is Pending instead of Bound.
Jan 15 16:15:06.048: INFO: PersistentVolumeClaim pvc-mss5l found but phase is Pending instead of Bound.
Jan 15 16:15:08.365: INFO: PersistentVolumeClaim pvc-mss5l found but phase is Pending instead of Bound.
Jan 15 16:15:10.587: INFO: PersistentVolumeClaim pvc-mss5l found but phase is Pending instead of Bound.
Jan 15 16:15:12.808: INFO: PersistentVolumeClaim pvc-mss5l found and phase=Bound (22.03116148s)
STEP: checking for CSIInlineVolumes feature
Jan 15 16:15:34.646: INFO: Error getting logs for pod csi-inline-volume-gltxz: the server rejected our request for an unknown reason (get pods csi-inline-volume-gltxz)
STEP: Deleting pod csi-inline-volume-gltxz in namespace csi-mock-volumes-187
STEP: Deleting the previously created pod
Jan 15 16:15:36.864: INFO: Deleting pod "pvc-volume-tester-t6g8t" in namespace "csi-mock-volumes-187"
Jan 15 16:15:37.067: INFO: Wait up to 5m0s for pod "pvc-volume-tester-t6g8t" to be fully deleted
STEP: Checking CSI driver logs
Jan 15 16:15:52.205: INFO: CSI driver logs:
mock driver started
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-187","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-4f1ac8b3-2aef-4e91-83ec-060e978bc6cc","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-4f1ac8b3-2aef-4e91-83ec-060e978bc6cc"}}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-187","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-187","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-187","max_volumes_per_node":2},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerPublishVolume","Request":{"volume_id":"4","node_id":"csi-mock-csi-mock-volumes-187","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-4f1ac8b3-2aef-4e91-83ec-060e978bc6cc","storage.kubernetes.io/csiProvisionerIdentity":"1579104911016-8081-csi-mock-csi-mock-volumes-187"}},"Response":{"publish_context":{"device":"/dev/mock","readonly":"false"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","publish_context":{"device":"/dev/mock","readonly":"false"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-4f1ac8b3-2aef-4e91-83ec-060e978bc6cc/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-4f1ac8b3-2aef-4e91-83ec-060e978bc6cc","storage.kubernetes.io/csiProvisionerIdentity":"1579104911016-8081-csi-mock-csi-mock-volumes-187"}},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","publish_context":{"device":"/dev/mock","readonly":"false"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-4f1ac8b3-2aef-4e91-83ec-060e978bc6cc/globalmount","target_path":"/var/lib/kubelet/pods/9cf16d76-ed91-4c88-ae84-2f5117cca2ca/volumes/kubernetes.io~csi/pvc-4f1ac8b3-2aef-4e91-83ec-060e978bc6cc/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/ephemeral":"false","csi.storage.k8s.io/pod.name":"pvc-volume-tester-t6g8t","csi.storage.k8s.io/pod.namespace":"csi-mock-volumes-187","csi.storage.k8s.io/pod.uid":"9cf16d76-ed91-4c88-ae84-2f5117cca2ca","csi.storage.k8s.io/serviceAccount.name":"default","name":"pvc-4f1ac8b3-2aef-4e91-83ec-060e978bc6cc","storage.kubernetes.io/csiProvisionerIdentity":"1579104911016-8081-csi-mock-csi-mock-volumes-187"}},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/9cf16d76-ed91-4c88-ae84-2f5117cca2ca/volumes/kubernetes.io~csi/pvc-4f1ac8b3-2aef-4e91-83ec-060e978bc6cc/mount"},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-4f1ac8b3-2aef-4e91-83ec-060e978bc6cc/globalmount"},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerUnpublishVolume","Request":{"volume_id":"4","node_id":"csi-mock-csi-mock-volumes-187"},"Response":{},"Error":""}

Jan 15 16:15:52.206: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default
Jan 15 16:15:52.206: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-t6g8t
Jan 15 16:15:52.206: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-187
Jan 15 16:15:52.206: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: 9cf16d76-ed91-4c88-ae84-2f5117cca2ca
Jan 15 16:15:52.206: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: false
... skipping 57 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:63
[It] should support sysctls
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:67
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:11.010 seconds]
[k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should support sysctls
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:67
------------------------------
{"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls","total":-1,"completed":5,"skipped":26,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:16:03.927: INFO: Only supported for providers [azure] (not gce)
... skipping 32 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: nfs]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver "nfs" does not support topology - skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:96
------------------------------
... skipping 56 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support file as subpath [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:228
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":7,"skipped":40,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:16:06.443: INFO: Distro gci doesn't support ntfs -- skipping
... skipping 124 lines ...
• [SLOW TEST:17.143 seconds]
[sig-storage] Projected downwardAPI
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":9,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:16:09.137: INFO: Driver local doesn't support ext4 -- skipping
... skipping 30 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-bindmounted]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 98 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support non-existent path
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:192
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":2,"skipped":11,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 96 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directories when readOnly specified in the volumeSource
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:392
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":1,"skipped":0,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:16:12.469: INFO: Only supported for providers [vsphere] (not gce)
... skipping 6 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: vsphere]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Only supported for providers [vsphere] (not gce)

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1383
------------------------------
... skipping 111 lines ...
STEP: cleaning the environment after gcepd
Jan 15 16:15:56.273: INFO: Deleting pod "gcepd-client" in namespace "volume-7834"
Jan 15 16:15:56.434: INFO: Wait up to 5m0s for pod "gcepd-client" to be fully deleted
STEP: Deleting pv and pvc
Jan 15 16:16:02.737: INFO: Deleting PersistentVolumeClaim "pvc-42fr5"
Jan 15 16:16:03.088: INFO: Deleting PersistentVolume "gcepd-kxjk5"
Jan 15 16:16:04.678: INFO: error deleting PD "bootstrap-e2e-d053fe5a-94a7-43fb-9247-0af695602520": googleapi: Error 400: The disk resource 'projects/k8s-gce-soak-1-5/zones/us-west1-b/disks/bootstrap-e2e-d053fe5a-94a7-43fb-9247-0af695602520' is already being used by 'projects/k8s-gce-soak-1-5/zones/us-west1-b/instances/bootstrap-e2e-minion-group-qkcq', resourceInUseByAnotherResource
Jan 15 16:16:04.678: INFO: Couldn't delete PD "bootstrap-e2e-d053fe5a-94a7-43fb-9247-0af695602520", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-gce-soak-1-5/zones/us-west1-b/disks/bootstrap-e2e-d053fe5a-94a7-43fb-9247-0af695602520' is already being used by 'projects/k8s-gce-soak-1-5/zones/us-west1-b/instances/bootstrap-e2e-minion-group-qkcq', resourceInUseByAnotherResource
Jan 15 16:16:12.277: INFO: Successfully deleted PD "bootstrap-e2e-d053fe5a-94a7-43fb-9247-0af695602520".
Jan 15 16:16:12.277: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:16:12.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-7834" for this suite.
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should store data
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":1,"skipped":3,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:16:12.704: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 51 lines ...
• [SLOW TEST:15.875 seconds]
[sig-storage] Projected secret
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":21,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:16:13.907: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:16:13.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 167 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:55
    [Testpattern: inline ephemeral CSI volume] ephemeral
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support multiple inline ephemeral volumes
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:177
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: inline ephemeral CSI volume] ephemeral should support multiple inline ephemeral volumes","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:16:14.920: INFO: Distro gci doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:16:14.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 21 lines ...
Jan 15 16:16:12.481: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename volume-provisioning
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in volume-provisioning-2912
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Dynamic Provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:143
[It] should report an error and create no PV
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:776
Jan 15 16:16:14.704: INFO: Only supported for providers [aws] (not gce)
[AfterEach] [sig-storage] Dynamic Provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:16:14.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-provisioning-2912" for this suite.


S [SKIPPING] [2.542 seconds]
[sig-storage] Dynamic Provisioning
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Invalid AWS KMS key
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:775
    should report an error and create no PV [It]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:776

    Only supported for providers [aws] (not gce)

    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:777
------------------------------
... skipping 89 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:16:16.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2223" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":2,"skipped":18,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:16:17.895: INFO: Distro gci doesn't support ntfs -- skipping
... skipping 76 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: nfs]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver "nfs" does not support topology - skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:96
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should be passed when podInfoOnMount=true","total":-1,"completed":2,"skipped":15,"failed":0}
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 15 16:16:02.691: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-9350
... skipping 25 lines ...
• [SLOW TEST:15.661 seconds]
[sig-storage] Projected downwardAPI
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:90
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":3,"skipped":15,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:16:18.358: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 62 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:16:19.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4909" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should create a quota without scopes","total":-1,"completed":3,"skipped":27,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:16:19.718: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:16:19.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 57 lines ...
• [SLOW TEST:13.088 seconds]
[k8s.io] InitContainer [NodeConformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should invoke init containers on a RestartNever pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":3,"skipped":23,"failed":0}

SS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 93 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330
    should create and stop a replication controller  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":-1,"completed":3,"skipped":9,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:16:23.225: INFO: Driver local doesn't support ntfs -- skipping
... skipping 61 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:332

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":-1,"completed":6,"skipped":31,"failed":0}
[BeforeEach] [sig-apps] DisruptionController
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 15 16:16:06.692: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename disruption
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in disruption-1462
... skipping 20 lines ...
• [SLOW TEST:19.142 seconds]
[sig-apps] DisruptionController
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update PodDisruptionBudget status
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:63
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should update PodDisruptionBudget status","total":-1,"completed":7,"skipped":31,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:16:25.839: INFO: Driver cinder doesn't support ntfs -- skipping
... skipping 178 lines ...
Jan 15 16:15:54.781: INFO: stdout: "NAMESPACE                            NAME             STATUS        VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                                                        AGE\ncsi-mock-volumes-948                 pvc-s9csn        Bound         pvc-383be570-15ca-4a20-b476-e1e1effeb0c0   1Gi        RWO            csi-mock-volumes-948-sc                                             65s\nkubectl-7630                         pvc1g5bncrtz7t   Pending                                                                            standard                                                            0s\npersistent-local-volumes-test-158    pvc-d6q5d        Pending                                                                            local-volume-test-storageclass-persistent-local-volumes-test-158    2s\npersistent-local-volumes-test-4687   pvc-rnlvj        Terminating   local-pv4zsfb                              2Gi        RWO            local-volume-test-storageclass-persistent-local-volumes-test-4687   41s\npersistent-local-volumes-test-8451   pvc-w2kfb        Terminating   local-pvzntgg                              2Gi        RWO            local-volume-test-storageclass-persistent-local-volumes-test-8451   40s\nprovisioning-2262                    pvc-7lsql        Bound         local-kwq9k                                2Gi        RWO            provisioning-2262                                                   22s\nprovisioning-4978                    pvc-fs6ml        Bound         local-pmbr6                                2Gi        RWO            provisioning-4978                                                   33s\nprovisioning-8413                    pvc-7z74k        Pending                                                                            provisioning-8413                                                   12s\nprovisioning-990                     pvc-h6d94        Bound         
pvc-20a925ae-ee6d-444e-92c2-27a20a1a8194   5Gi        RWO            provisioning-990-gcepd-sc92bb6                                      32s\nvolume-3891                          pvc-wzdbt        Pending                                                                            volume-3891                                                         11s\nvolume-4652                          pvc-sn694        Bound         gcepd-fzctn                                2Gi        RWO            volume-4652                                                         51s\nvolume-5786                          pvc-pqlsx        Bound         gcepd-qvrsx                                2Gi        RWO            volume-5786                                                         22s\nvolume-7834                          pvc-42fr5        Bound         gcepd-kxjk5                                2Gi        RWO            volume-7834                                                         67s\n"
Jan 15 16:15:54.920: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.107.52 --kubeconfig=/workspace/.kube/config get persistentvolumes --all-namespaces'
Jan 15 16:15:55.160: INFO: stderr: ""
Jan 15 16:15:55.160: INFO: stdout: "NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS        CLAIM                                          STORAGECLASS                                                        REASON   AGE\ngcepd-fzctn                                2Gi        RWO            Retain           Bound         volume-4652/pvc-sn694                          volume-4652                                                                  51s\ngcepd-kxjk5                                2Gi        RWO            Retain           Bound         volume-7834/pvc-42fr5                          volume-7834                                                                  66s\ngcepd-qvrsx                                2Gi        RWO            Retain           Bound         volume-5786/pvc-pqlsx                          volume-5786                                                                  23s\nlocal-9bzvh                                2Gi        RWO            Retain           Available                                                    provisioning-8413                                                            13s\nlocal-kwq9k                                2Gi        RWO            Retain           Bound         provisioning-2262/pvc-7lsql                    provisioning-2262                                                            23s\nlocal-pmbr6                                2Gi        RWO            Retain           Bound         provisioning-4978/pvc-fs6ml                    provisioning-4978                                                            34s\nlocal-pv22h85                              2Gi        RWO            Retain           Available                                                    local-volume-test-storageclass-persistent-local-volumes-test-158             3s\nlocal-pvzntgg                              2Gi        RWO            Retain           Terminating   
persistent-local-volumes-test-8451/pvc-w2kfb   local-volume-test-storageclass-persistent-local-volumes-test-8451            41s\nnfs-p5q9d                                  2Gi        RWO            Retain           Available                                                    volume-3891                                                                  12s\npv1nameg5bncrtz7t                          3M         RWO            Retain           Available                                                                                                                                 1s\npvc-20a925ae-ee6d-444e-92c2-27a20a1a8194   5Gi        RWO            Delete           Bound         provisioning-990/pvc-h6d94                     provisioning-990-gcepd-sc92bb6                                               30s\npvc-383be570-15ca-4a20-b476-e1e1effeb0c0   6Gi        RWO            Delete           Bound         csi-mock-volumes-948/pvc-s9csn                 csi-mock-volumes-948-sc                                                      45s\n"
Jan 15 16:15:55.345: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.107.52 --kubeconfig=/workspace/.kube/config get events --all-namespaces'
Jan 15 16:15:55.951: INFO: stderr: ""
Jan 15 16:15:55.951: INFO: stdout: "NAMESPACE                            LAST SEEN   TYPE      REASON                       OBJECT                                                              MESSAGE\napparmor-1629                        29s         Normal    Pulled                       pod/apparmor-loader-r5w6q                                           Successfully pulled image \"gcr.io/kubernetes-e2e-test-images/apparmor-loader:1.0\"\napparmor-1629                        29s         Normal    Created                      pod/apparmor-loader-r5w6q                                           Created container apparmor-loader\napparmor-1629                        28s         Normal    Started                      pod/apparmor-loader-r5w6q                                           Started container apparmor-loader\napparmor-1629                        39s         Normal    SuccessfulCreate             replicationcontroller/apparmor-loader                               Created pod: apparmor-loader-r5w6q\napparmor-1629                        18s         Normal    Scheduled                    pod/test-apparmor-fp7js                                             Successfully assigned apparmor-1629/test-apparmor-fp7js to bootstrap-e2e-minion-group-vrtv\napparmor-1629                        15s         Normal    Pulled                       pod/test-apparmor-fp7js                                             Container image \"docker.io/library/busybox:1.29\" already present on machine\napparmor-1629                        15s         Normal    Created                      pod/test-apparmor-fp7js                                             Created container test\napparmor-1629                        13s         Normal    Started                      pod/test-apparmor-fp7js                                             Started container test\ncontainer-probe-8849                 34s         Normal    Scheduled                    
pod/test-webserver-bccfe720-9fca-4150-8386-32304c5297d7             Successfully assigned container-probe-8849/test-webserver-bccfe720-9fca-4150-8386-32304c5297d7 to bootstrap-e2e-minion-group-qn53\ncontainer-probe-8849                 31s         Normal    Pulling                      pod/test-webserver-bccfe720-9fca-4150-8386-32304c5297d7             Pulling image \"gcr.io/kubernetes-e2e-test-images/test-webserver:1.0\"\ncontainer-probe-8849                 29s         Normal    Pulled                       pod/test-webserver-bccfe720-9fca-4150-8386-32304c5297d7             Successfully pulled image \"gcr.io/kubernetes-e2e-test-images/test-webserver:1.0\"\ncontainer-probe-8849                 29s         Normal    Created                      pod/test-webserver-bccfe720-9fca-4150-8386-32304c5297d7             Created container test-webserver\ncontainer-probe-8849                 29s         Normal    Started                      pod/test-webserver-bccfe720-9fca-4150-8386-32304c5297d7             Started container test-webserver\ncsi-mock-volumes-187                 21s         Normal    Scheduled                    pod/csi-inline-volume-gltxz                                         Successfully assigned csi-mock-volumes-187/csi-inline-volume-gltxz to bootstrap-e2e-minion-group-qn53\ncsi-mock-volumes-187                 62s         Normal    Pulling                      pod/csi-mockplugin-0                                                Pulling image \"quay.io/k8scsi/csi-provisioner:v1.5.0\"\ncsi-mock-volumes-187                 54s         Normal    Pulled                       pod/csi-mockplugin-0                                                Successfully pulled image \"quay.io/k8scsi/csi-provisioner:v1.5.0\"\ncsi-mock-volumes-187                 54s         Normal    Created                      pod/csi-mockplugin-0                                                Created container csi-provisioner\ncsi-mock-volumes-187                 54s         Normal    
Started   pod/csi-mockplugin-0   Started container csi-provisioner
csi-mock-volumes-187   54s   Normal   Pulling   pod/csi-mockplugin-0   Pulling image "quay.io/k8scsi/csi-node-driver-registrar:v1.2.0"
csi-mock-volumes-187   50s   Normal   Pulled   pod/csi-mockplugin-0   Successfully pulled image "quay.io/k8scsi/csi-node-driver-registrar:v1.2.0"
csi-mock-volumes-187   49s   Normal   Created   pod/csi-mockplugin-0   Created container driver-registrar
csi-mock-volumes-187   48s   Normal   Started   pod/csi-mockplugin-0   Started container driver-registrar
csi-mock-volumes-187   48s   Normal   Pulling   pod/csi-mockplugin-0   Pulling image "quay.io/k8scsi/mock-driver:v2.1.0"
csi-mock-volumes-187   45s   Normal   Pulled   pod/csi-mockplugin-0   Successfully pulled image "quay.io/k8scsi/mock-driver:v2.1.0"
csi-mock-volumes-187   45s   Normal   Created   pod/csi-mockplugin-0   Created container mock
csi-mock-volumes-187   45s   Normal   Started   pod/csi-mockplugin-0   Started container mock
csi-mock-volumes-187   62s   Normal   Pulling   pod/csi-mockplugin-attacher-0   Pulling image "quay.io/k8scsi/csi-attacher:v2.1.0"
csi-mock-volumes-187   54s   Normal   Pulled   pod/csi-mockplugin-attacher-0   Successfully pulled image "quay.io/k8scsi/csi-attacher:v2.1.0"
csi-mock-volumes-187   54s   Normal   Created   pod/csi-mockplugin-attacher-0   Created container csi-attacher
csi-mock-volumes-187   53s   Normal   Started   pod/csi-mockplugin-attacher-0   Started container csi-attacher
csi-mock-volumes-187   65s   Normal   SuccessfulCreate   statefulset/csi-mockplugin-attacher   create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful
csi-mock-volumes-187   65s   Normal   SuccessfulCreate   statefulset/csi-mockplugin   create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful
csi-mock-volumes-187   58s   Normal   ExternalProvisioning   persistentvolumeclaim/pvc-mss5l   waiting for a volume to be created, either by external provisioner "csi-mock-csi-mock-volumes-187" or manually created by system administrator
csi-mock-volumes-187   44s   Normal   Provisioning   persistentvolumeclaim/pvc-mss5l   External provisioner is provisioning volume for claim "csi-mock-volumes-187/pvc-mss5l"
csi-mock-volumes-187   44s   Normal   ProvisioningSucceeded   persistentvolumeclaim/pvc-mss5l   Successfully provisioned volume pvc-4f1ac8b3-2aef-4e91-83ec-060e978bc6cc
csi-mock-volumes-187   41s   Normal   SuccessfulAttachVolume   pod/pvc-volume-tester-t6g8t   AttachVolume.Attach succeeded for volume "pvc-4f1ac8b3-2aef-4e91-83ec-060e978bc6cc"
csi-mock-volumes-187   23s   Normal   Pulled   pod/pvc-volume-tester-t6g8t   Container image "k8s.gcr.io/pause:3.1" already present on machine
csi-mock-volumes-187   23s   Normal   Created   pod/pvc-volume-tester-t6g8t   Created container volume-tester
csi-mock-volumes-187   22s   Normal   Started   pod/pvc-volume-tester-t6g8t   Started container volume-tester
csi-mock-volumes-187   18s   Normal   Killing   pod/pvc-volume-tester-t6g8t   Stopping container volume-tester
csi-mock-volumes-4687   60s   Normal   Pulling   pod/csi-mockplugin-0   Pulling image "quay.io/k8scsi/csi-provisioner:v1.5.0"
csi-mock-volumes-4687   57s   Normal   Pulled   pod/csi-mockplugin-0   Successfully pulled image "quay.io/k8scsi/csi-provisioner:v1.5.0"
csi-mock-volumes-4687   55s   Normal   Created   pod/csi-mockplugin-0   Created container csi-provisioner
csi-mock-volumes-4687   54s   Normal   Started   pod/csi-mockplugin-0   Started container csi-provisioner
csi-mock-volumes-4687   54s   Normal   Pulling   pod/csi-mockplugin-0   Pulling image "quay.io/k8scsi/csi-node-driver-registrar:v1.2.0"
csi-mock-volumes-4687   52s   Normal   Pulled   pod/csi-mockplugin-0   Successfully pulled image "quay.io/k8scsi/csi-node-driver-registrar:v1.2.0"
csi-mock-volumes-4687   51s   Normal   Created   pod/csi-mockplugin-0   Created container driver-registrar
csi-mock-volumes-4687   51s   Normal   Started   pod/csi-mockplugin-0   Started container driver-registrar
csi-mock-volumes-4687   51s   Normal   Pulling   pod/csi-mockplugin-0   Pulling image "quay.io/k8scsi/mock-driver:v2.1.0"
csi-mock-volumes-4687   48s   Normal   Pulled   pod/csi-mockplugin-0   Successfully pulled image "quay.io/k8scsi/mock-driver:v2.1.0"
csi-mock-volumes-4687   47s   Normal   Created   pod/csi-mockplugin-0   Created container mock
csi-mock-volumes-4687   47s   Normal   Started   pod/csi-mockplugin-0   Started container mock
csi-mock-volumes-4687   60s   Normal   Pulling   pod/csi-mockplugin-attacher-0   Pulling image "quay.io/k8scsi/csi-attacher:v2.1.0"
csi-mock-volumes-4687   55s   Normal   Pulled   pod/csi-mockplugin-attacher-0   Successfully pulled image "quay.io/k8scsi/csi-attacher:v2.1.0"
csi-mock-volumes-4687   55s   Normal   Created   pod/csi-mockplugin-attacher-0   Created container csi-attacher
csi-mock-volumes-4687   54s   Normal   Started   pod/csi-mockplugin-attacher-0   Started container csi-attacher
csi-mock-volumes-4687   64s   Normal   SuccessfulCreate   statefulset/csi-mockplugin-attacher   create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful
csi-mock-volumes-4687   60s   Normal   Pulling   pod/csi-mockplugin-resizer-0   Pulling image "quay.io/k8scsi/csi-resizer:v0.4.0"
csi-mock-volumes-4687   57s   Normal   Pulled   pod/csi-mockplugin-resizer-0   Successfully pulled image "quay.io/k8scsi/csi-resizer:v0.4.0"
csi-mock-volumes-4687   55s   Normal   Created   pod/csi-mockplugin-resizer-0   Created container csi-resizer
csi-mock-volumes-4687   54s   Normal   Started   pod/csi-mockplugin-resizer-0   Started container csi-resizer
csi-mock-volumes-4687   64s   Normal   SuccessfulCreate   statefulset/csi-mockplugin-resizer   create Pod csi-mockplugin-resizer-0 in StatefulSet csi-mockplugin-resizer successful
csi-mock-volumes-4687   64s   Normal   SuccessfulCreate   statefulset/csi-mockplugin   create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful
csi-mock-volumes-4687   58s   Normal   ExternalProvisioning   persistentvolumeclaim/pvc-5q9sx   waiting for a volume to be created, either by external provisioner "csi-mock-csi-mock-volumes-4687" or manually created by system administrator
csi-mock-volumes-4687   45s   Normal   Provisioning   persistentvolumeclaim/pvc-5q9sx   External provisioner is provisioning volume for claim "csi-mock-volumes-4687/pvc-5q9sx"
csi-mock-volumes-4687   45s   Normal   ProvisioningSucceeded   persistentvolumeclaim/pvc-5q9sx   Successfully provisioned volume pvc-3884a223-847f-44bc-b1a8-ed8e224d61ef
csi-mock-volumes-4687   35s   Warning   ExternalExpanding   persistentvolumeclaim/pvc-5q9sx   Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.
csi-mock-volumes-4687   35s   Normal   Resizing   persistentvolumeclaim/pvc-5q9sx   External resizer is resizing volume pvc-3884a223-847f-44bc-b1a8-ed8e224d61ef
csi-mock-volumes-4687   34s   Normal   FileSystemResizeRequired   persistentvolumeclaim/pvc-5q9sx   Require file system resize of volume on node
csi-mock-volumes-4687   19s   Normal   FileSystemResizeSuccessful   persistentvolumeclaim/pvc-5q9sx   MountVolume.NodeExpandVolume succeeded for volume "pvc-3884a223-847f-44bc-b1a8-ed8e224d61ef"
csi-mock-volumes-4687   43s   Normal   SuccessfulAttachVolume   pod/pvc-volume-tester-h67hb   AttachVolume.Attach succeeded for volume "pvc-3884a223-847f-44bc-b1a8-ed8e224d61ef"
csi-mock-volumes-4687   39s   Normal   Pulled   pod/pvc-volume-tester-h67hb   Container image "k8s.gcr.io/pause:3.1" already present on machine
csi-mock-volumes-4687   39s   Normal   Created   pod/pvc-volume-tester-h67hb   Created container volume-tester
csi-mock-volumes-4687   38s   Normal   Started   pod/pvc-volume-tester-h67hb   Started container volume-tester
csi-mock-volumes-4687   32s   Normal   Killing   pod/pvc-volume-tester-h67hb   Stopping container volume-tester
csi-mock-volumes-4687   19s   Normal   FileSystemResizeSuccessful   pod/pvc-volume-tester-v64xj   MountVolume.NodeExpandVolume succeeded for volume "pvc-3884a223-847f-44bc-b1a8-ed8e224d61ef"
csi-mock-volumes-4687   17s   Normal   Pulled   pod/pvc-volume-tester-v64xj   Container image "k8s.gcr.io/pause:3.1" already present on machine
csi-mock-volumes-4687   17s   Normal   Created   pod/pvc-volume-tester-v64xj   Created container volume-tester
csi-mock-volumes-4687   16s   Normal   Started   pod/pvc-volume-tester-v64xj   Started container volume-tester
csi-mock-volumes-948   63s   Normal   Pulling   pod/csi-mockplugin-0   Pulling image "quay.io/k8scsi/csi-provisioner:v1.5.0"
csi-mock-volumes-948   57s   Normal   Pulled   pod/csi-mockplugin-0   Successfully pulled image "quay.io/k8scsi/csi-provisioner:v1.5.0"
csi-mock-volumes-948   55s   Normal   Created   pod/csi-mockplugin-0   Created container csi-provisioner
csi-mock-volumes-948   54s   Normal   Started   pod/csi-mockplugin-0   Started container csi-provisioner
csi-mock-volumes-948   54s   Normal   Pulling   pod/csi-mockplugin-0   Pulling image "quay.io/k8scsi/csi-node-driver-registrar:v1.2.0"
csi-mock-volumes-948   52s   Normal   Pulled   pod/csi-mockplugin-0   Successfully pulled image "quay.io/k8scsi/csi-node-driver-registrar:v1.2.0"
csi-mock-volumes-948   51s   Normal   Created   pod/csi-mockplugin-0   Created container driver-registrar
csi-mock-volumes-948   51s   Normal   Started   pod/csi-mockplugin-0   Started container driver-registrar
csi-mock-volumes-948   51s   Normal   Pulling   pod/csi-mockplugin-0   Pulling image "quay.io/k8scsi/mock-driver:v2.1.0"
csi-mock-volumes-948   48s   Normal   Pulled   pod/csi-mockplugin-0   Successfully pulled image "quay.io/k8scsi/mock-driver:v2.1.0"
csi-mock-volumes-948   47s   Normal   Created   pod/csi-mockplugin-0   Created container mock
csi-mock-volumes-948   47s   Normal   Started   pod/csi-mockplugin-0   Started container mock
csi-mock-volumes-948   63s   Normal   Pulling   pod/csi-mockplugin-resizer-0   Pulling image "quay.io/k8scsi/csi-resizer:v0.4.0"
csi-mock-volumes-948   57s   Normal   Pulled   pod/csi-mockplugin-resizer-0   Successfully pulled image "quay.io/k8scsi/csi-resizer:v0.4.0"
csi-mock-volumes-948   55s   Normal   Created   pod/csi-mockplugin-resizer-0   Created container csi-resizer
csi-mock-volumes-948   54s   Normal   Started   pod/csi-mockplugin-resizer-0   Started container csi-resizer
csi-mock-volumes-948   66s   Normal   SuccessfulCreate   statefulset/csi-mockplugin-resizer   create Pod csi-mockplugin-resizer-0 in StatefulSet csi-mockplugin-resizer successful
csi-mock-volumes-948   66s   Normal   SuccessfulCreate   statefulset/csi-mockplugin   create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful
csi-mock-volumes-948   57s   Normal   ExternalProvisioning   persistentvolumeclaim/pvc-s9csn   waiting for a volume to be created, either by external provisioner "csi-mock-csi-mock-volumes-948" or manually created by system administrator
csi-mock-volumes-948   45s   Normal   Provisioning   persistentvolumeclaim/pvc-s9csn   External provisioner is provisioning volume for claim "csi-mock-volumes-948/pvc-s9csn"
csi-mock-volumes-948   45s   Normal   ProvisioningSucceeded   persistentvolumeclaim/pvc-s9csn   Successfully provisioned volume pvc-383be570-15ca-4a20-b476-e1e1effeb0c0
csi-mock-volumes-948   37s   Warning   ExternalExpanding   persistentvolumeclaim/pvc-s9csn   Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.
csi-mock-volumes-948   37s   Normal   Resizing   persistentvolumeclaim/pvc-s9csn   External resizer is resizing volume pvc-383be570-15ca-4a20-b476-e1e1effeb0c0
csi-mock-volumes-948   36s   Normal   FileSystemResizeRequired   persistentvolumeclaim/pvc-s9csn   Require file system resize of volume on node
csi-mock-volumes-948   40s   Normal   Pulled   pod/pvc-volume-tester-w6rhc   Container image "k8s.gcr.io/pause:3.1" already present on machine
csi-mock-volumes-948   40s   Normal   Created   pod/pvc-volume-tester-w6rhc   Created container volume-tester
csi-mock-volumes-948   39s   Normal   Started   pod/pvc-volume-tester-w6rhc   Started container volume-tester
default   4m29s   Normal   RegisteredNode   node/bootstrap-e2e-master   Node bootstrap-e2e-master event: Registered Node bootstrap-e2e-master in Controller
default   4m26s   Normal   Starting   node/bootstrap-e2e-minion-group-q10p   Starting kubelet.
default   4m26s   Normal   NodeHasSufficientMemory   node/bootstrap-e2e-minion-group-q10p   Node bootstrap-e2e-minion-group-q10p status is now: NodeHasSufficientMemory
default   4m26s   Normal   NodeHasNoDiskPressure   node/bootstrap-e2e-minion-group-q10p   Node bootstrap-e2e-minion-group-q10p status is now: NodeHasNoDiskPressure
default   4m26s   Normal   NodeHasSufficientPID   node/bootstrap-e2e-minion-group-q10p   Node bootstrap-e2e-minion-group-q10p status is now: NodeHasSufficientPID
default   4m26s   Normal   NodeAllocatableEnforced   node/bootstrap-e2e-minion-group-q10p   Updated Node Allocatable limit across pods
default   4m25s   Normal   NodeReady   node/bootstrap-e2e-minion-group-q10p   Node bootstrap-e2e-minion-group-q10p status is now: NodeReady
default   4m24s   Normal   Starting   node/bootstrap-e2e-minion-group-q10p   Starting kube-proxy.
default   4m24s   Normal   RegisteredNode   node/bootstrap-e2e-minion-group-q10p   Node bootstrap-e2e-minion-group-q10p event: Registered Node bootstrap-e2e-minion-group-q10p in Controller
default   4m21s   Warning   ContainerdStart   node/bootstrap-e2e-minion-group-q10p   Starting containerd container runtime...
default   4m21s   Warning   DockerStart   node/bootstrap-e2e-minion-group-q10p   Starting Docker Application Container Engine...
default   4m21s   Warning   KubeletStart   node/bootstrap-e2e-minion-group-q10p   Started Kubernetes kubelet.
default   4m26s   Normal   Starting   node/bootstrap-e2e-minion-group-qkcq   Starting kubelet.
default   4m25s   Normal   NodeHasSufficientMemory   node/bootstrap-e2e-minion-group-qkcq   Node bootstrap-e2e-minion-group-qkcq status is now: NodeHasSufficientMemory
default   4m25s   Normal   NodeHasNoDiskPressure   node/bootstrap-e2e-minion-group-qkcq   Node bootstrap-e2e-minion-group-qkcq status is now: NodeHasNoDiskPressure
default   4m25s   Normal   NodeHasSufficientPID   node/bootstrap-e2e-minion-group-qkcq   Node bootstrap-e2e-minion-group-qkcq status is now: NodeHasSufficientPID
default   4m25s   Normal   NodeAllocatableEnforced   node/bootstrap-e2e-minion-group-qkcq   Updated Node Allocatable limit across pods
default   4m25s   Normal   NodeReady   node/bootstrap-e2e-minion-group-qkcq   Node bootstrap-e2e-minion-group-qkcq status is now: NodeReady
default   4m24s   Normal   RegisteredNode   node/bootstrap-e2e-minion-group-qkcq   Node bootstrap-e2e-minion-group-qkcq event: Registered Node bootstrap-e2e-minion-group-qkcq in Controller
default   4m23s   Normal   Starting   node/bootstrap-e2e-minion-group-qkcq   Starting kube-proxy.
default   4m20s   Warning   ContainerdStart   node/bootstrap-e2e-minion-group-qkcq   Starting containerd container runtime...
default   4m20s   Warning   DockerStart   node/bootstrap-e2e-minion-group-qkcq   Starting Docker Application Container Engine...
default   4m20s   Warning   KubeletStart   node/bootstrap-e2e-minion-group-qkcq   Started Kubernetes kubelet.
default   4m27s   Normal   Starting   node/bootstrap-e2e-minion-group-qn53   Starting kubelet.
default   4m26s   Normal   NodeHasSufficientMemory   node/bootstrap-e2e-minion-group-qn53   Node bootstrap-e2e-minion-group-qn53 status is now: NodeHasSufficientMemory
default   4m26s   Normal   NodeHasNoDiskPressure   node/bootstrap-e2e-minion-group-qn53   Node bootstrap-e2e-minion-group-qn53 status is now: NodeHasNoDiskPressure
default   4m26s   Normal   NodeHasSufficientPID   node/bootstrap-e2e-minion-group-qn53   Node bootstrap-e2e-minion-group-qn53 status is now: NodeHasSufficientPID
default   4m26s   Normal   NodeAllocatableEnforced   node/bootstrap-e2e-minion-group-qn53   Updated Node Allocatable limit across pods
default   4m26s   Normal   NodeReady   node/bootstrap-e2e-minion-group-qn53   Node bootstrap-e2e-minion-group-qn53 status is now: NodeReady
default   4m24s   Normal   Starting   node/bootstrap-e2e-minion-group-qn53   Starting kube-proxy.
default   4m24s   Normal   RegisteredNode   node/bootstrap-e2e-minion-group-qn53   Node bootstrap-e2e-minion-group-qn53 event: Registered Node bootstrap-e2e-minion-group-qn53 in Controller
default   4m22s   Warning   ContainerdStart   node/bootstrap-e2e-minion-group-qn53   Starting containerd container runtime...
default   4m21s   Warning   DockerStart   node/bootstrap-e2e-minion-group-qn53   Starting Docker Application Container Engine...
default   4m21s   Warning   KubeletStart   node/bootstrap-e2e-minion-group-qn53   Started Kubernetes kubelet.
default   4m25s   Normal   Starting   node/bootstrap-e2e-minion-group-vrtv   Starting kubelet.
default   4m25s   Normal   NodeHasSufficientMemory   node/bootstrap-e2e-minion-group-vrtv   Node bootstrap-e2e-minion-group-vrtv status is now: NodeHasSufficientMemory
default   4m25s   Normal   NodeHasNoDiskPressure   node/bootstrap-e2e-minion-group-vrtv   Node bootstrap-e2e-minion-group-vrtv status is now: NodeHasNoDiskPressure
default   4m25s   Normal   NodeHasSufficientPID   node/bootstrap-e2e-minion-group-vrtv   Node bootstrap-e2e-minion-group-vrtv status is now: NodeHasSufficientPID
default   4m25s   Normal   NodeAllocatableEnforced   node/bootstrap-e2e-minion-group-vrtv   Updated Node Allocatable limit across pods
default   4m24s   Normal   NodeReady   node/bootstrap-e2e-minion-group-vrtv   Node bootstrap-e2e-minion-group-vrtv status is now: NodeReady
default   4m24s   Normal   RegisteredNode   node/bootstrap-e2e-minion-group-vrtv   Node bootstrap-e2e-minion-group-vrtv event: Registered Node bootstrap-e2e-minion-group-vrtv in Controller
default   4m22s   Normal   Starting   node/bootstrap-e2e-minion-group-vrtv   Starting kube-proxy.
default   4m21s   Warning   ContainerdStart   node/bootstrap-e2e-minion-group-vrtv   Starting containerd container runtime...
default   4m21s   Warning   DockerStart   node/bootstrap-e2e-minion-group-vrtv   Starting Docker Application Container Engine...
default   4m21s   Warning   KubeletStart   node/bootstrap-e2e-minion-group-vrtv   Started Kubernetes kubelet.
dns-6389   38s   Normal   Scheduled   pod/dns-test-78eac08d-ba36-44fa-97c1-21610f9a3f62   Successfully assigned dns-6389/dns-test-78eac08d-ba36-44fa-97c1-21610f9a3f62 to bootstrap-e2e-minion-group-vrtv
dns-6389   32s   Normal   Pulled   pod/dns-test-78eac08d-ba36-44fa-97c1-21610f9a3f62   Container image "gcr.io/kubernetes-e2e-test-images/test-webserver:1.0" already present on machine
dns-6389   32s   Normal   Created   pod/dns-test-78eac08d-ba36-44fa-97c1-21610f9a3f62   Created container webserver
dns-6389   32s   Normal   Started   pod/dns-test-78eac08d-ba36-44fa-97c1-21610f9a3f62   Started container webserver
dns-6389   32s   Normal   Pulling   pod/dns-test-78eac08d-ba36-44fa-97c1-21610f9a3f62   Pulling image "gcr.io/kubernetes-e2e-test-images/dnsutils:1.1"
dns-6389   29s   Normal   Pulled   pod/dns-test-78eac08d-ba36-44fa-97c1-21610f9a3f62   Successfully pulled image "gcr.io/kubernetes-e2e-test-images/dnsutils:1.1"
dns-6389   29s   Normal   Created   pod/dns-test-78eac08d-ba36-44fa-97c1-21610f9a3f62   Created container querier
dns-6389   28s   Normal   Started   pod/dns-test-78eac08d-ba36-44fa-97c1-21610f9a3f62   Started container querier
dns-6389   28s   Normal   Pulling   pod/dns-test-78eac08d-ba36-44fa-97c1-21610f9a3f62   Pulling image "gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0"
dns-6389   4s   Normal   Pulled   pod/dns-test-78eac08d-ba36-44fa-97c1-21610f9a3f62   Successfully pulled image "gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0"
dns-6389   4s   Normal   Created   pod/dns-test-78eac08d-ba36-44fa-97c1-21610f9a3f62   Created container jessie-querier
dns-6389   4s   Normal   Started   pod/dns-test-78eac08d-ba36-44fa-97c1-21610f9a3f62   Started container jessie-querier
dns-6389   1s   Normal   Killing   pod/dns-test-78eac08d-ba36-44fa-97c1-21610f9a3f62   Stopping container webserver
dns-6389   1s   Normal   Killing   pod/dns-test-78eac08d-ba36-44fa-97c1-21610f9a3f62   Stopping container jessie-querier
dns-6389   1s   Normal   Killing   pod/dns-test-78eac08d-ba36-44fa-97c1-21610f9a3f62   Stopping container querier
downward-api-7889   9s   Normal   Scheduled   pod/downward-api-69274024-01da-41d0-9b9c-30426cf27f57   Successfully assigned downward-api-7889/downward-api-69274024-01da-41d0-9b9c-30426cf27f57 to bootstrap-e2e-minion-group-qn53
downward-api-7889   8s   Normal   Pulled   pod/downward-api-69274024-01da-41d0-9b9c-30426cf27f57   Container image "docker.io/library/busybox:1.29" already present on machine
downward-api-7889   8s   Normal   Created   pod/downward-api-69274024-01da-41d0-9b9c-30426cf27f57   Created container dapi-container
downward-api-7889   8s   Normal   Started   pod/downward-api-69274024-01da-41d0-9b9c-30426cf27f57   Started container dapi-container
ephemeral-1794   58s   Normal   Pulling   pod/csi-hostpath-attacher-0   Pulling image "quay.io/k8scsi/csi-attacher:v2.1.0"
ephemeral-1794   47s   Normal   Pulled   pod/csi-hostpath-attacher-0   Successfully pulled image "quay.io/k8scsi/csi-attacher:v2.1.0"
ephemeral-1794   46s   Normal   Created   pod/csi-hostpath-attacher-0   Created container csi-attacher
ephemeral-1794   43s   Normal   Started   pod/csi-hostpath-attacher-0   Started container csi-attacher
ephemeral-1794   65s   Warning   FailedCreate   statefulset/csi-hostpath-attacher   create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher failed error: pods "csi-hostpath-attacher-0" is forbidden: unable to validate against any pod security policy: []
ephemeral-1794   63s   Normal   SuccessfulCreate   statefulset/csi-hostpath-attacher   create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful
ephemeral-1794   58s   Normal   Pulling   pod/csi-hostpath-provisioner-0   Pulling image "quay.io/k8scsi/csi-provisioner:v1.5.0"
ephemeral-1794   47s   Normal   Pulled   pod/csi-hostpath-provisioner-0   Successfully pulled image "quay.io/k8scsi/csi-provisioner:v1.5.0"
ephemeral-1794   45s   Normal   Created   pod/csi-hostpath-provisioner-0   Created container csi-provisioner
ephemeral-1794   42s   Normal   Started   pod/csi-hostpath-provisioner-0   Started container csi-provisioner
ephemeral-1794   65s   Warning   FailedCreate   statefulset/csi-hostpath-provisioner   create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner failed error: pods "csi-hostpath-provisioner-0" is forbidden: unable to validate against any pod security policy: []
ephemeral-1794   64s   Normal   SuccessfulCreate   statefulset/csi-hostpath-provisioner   create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful
ephemeral-1794   58s   Normal   Pulling   pod/csi-hostpath-resizer-0   Pulling image "quay.io/k8scsi/csi-resizer:v0.4.0"
ephemeral-1794   47s   Normal   Pulled   pod/csi-hostpath-resizer-0   Successfully pulled image "quay.io/k8scsi/csi-resizer:v0.4.0"
ephemeral-1794   47s   Normal   Created   pod/csi-hostpath-resizer-0   Created container csi-resizer
ephemeral-1794   43s   Normal   Started   pod/csi-hostpath-resizer-0   Started container csi-resizer
ephemeral-1794   65s   Warning   FailedCreate   statefulset/csi-hostpath-resizer   create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer failed error: pods "csi-hostpath-resizer-0" is forbidden: unable to validate against any pod security policy: []
ephemeral-1794   64s   Normal   SuccessfulCreate   statefulset/csi-hostpath-resizer   create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful
ephemeral-1794   59s   Normal   Pulling   pod/csi-hostpathplugin-0   Pulling image "quay.io/k8scsi/csi-node-driver-registrar:v1.2.0"
ephemeral-1794   57s   Normal   Pulled   pod/csi-hostpathplugin-0   Successfully pulled image "quay.io/k8scsi/csi-node-driver-registrar:v1.2.0"
ephemeral-1794   56s   Normal   Created   pod/csi-hostpathplugin-0   Created container node-driver-registrar
ephemeral-1794   55s   Normal   Started   pod/csi-hostpathplugin-0   Started container node-driver-registrar
ephemeral-1794   55s   Normal   Pulling   pod/csi-hostpathplugin-0
 Pulling image \"quay.io/k8scsi/hostpathplugin:v1.3.0-rc1\"\nephemeral-1794                       44s         Normal    Pulled                       pod/csi-hostpathplugin-0                                            Successfully pulled image \"quay.io/k8scsi/hostpathplugin:v1.3.0-rc1\"\nephemeral-1794                       44s         Normal    Created                      pod/csi-hostpathplugin-0                                            Created container hostpath\nephemeral-1794                       40s         Normal    Started                      pod/csi-hostpathplugin-0                                            Started container hostpath\nephemeral-1794                       40s         Normal    Pulling                      pod/csi-hostpathplugin-0                                            Pulling image \"quay.io/k8scsi/livenessprobe:v1.1.0\"\nephemeral-1794                       37s         Normal    Pulled                       pod/csi-hostpathplugin-0                                            Successfully pulled image \"quay.io/k8scsi/livenessprobe:v1.1.0\"\nephemeral-1794                       37s         Normal    Created                      pod/csi-hostpathplugin-0                                            Created container liveness-probe\nephemeral-1794                       35s         Normal    Started                      pod/csi-hostpathplugin-0                                            Started container liveness-probe\nephemeral-1794                       65s         Normal    SuccessfulCreate             statefulset/csi-hostpathplugin                                      create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\nephemeral-1794                       58s         Normal    Pulling                      pod/csi-snapshotter-0                                               Pulling image \"quay.io/k8scsi/csi-snapshotter:v2.0.0\"\nephemeral-1794                       48s         Normal    Pulled                 
      pod/csi-snapshotter-0                                               Successfully pulled image \"quay.io/k8scsi/csi-snapshotter:v2.0.0\"\nephemeral-1794                       47s         Normal    Created                      pod/csi-snapshotter-0                                               Created container csi-snapshotter\nephemeral-1794                       43s         Normal    Started                      pod/csi-snapshotter-0                                               Started container csi-snapshotter\nephemeral-1794                       65s         Warning   FailedCreate                 statefulset/csi-snapshotter                                         create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter failed error: pods \"csi-snapshotter-0\" is forbidden: unable to validate against any pod security policy: []\nephemeral-1794                       64s         Normal    SuccessfulCreate             statefulset/csi-snapshotter                                         create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter successful\nephemeral-1794                       48s         Warning   FailedMount                  pod/inline-volume-tester-6f9st                                      MountVolume.SetUp failed for volume \"my-volume-0\" : kubernetes.io/csi: mounter.SetUpAt failed to get CSI client: driver name csi-hostpath-ephemeral-1794 not found in the list of registered CSI drivers\nephemeral-1794                       29s         Normal    Pulled                       pod/inline-volume-tester-6f9st                                      Container image \"docker.io/library/busybox:1.29\" already present on machine\nephemeral-1794                       29s         Normal    Created                      pod/inline-volume-tester-6f9st                                      Created container csi-volume-tester\nephemeral-1794                       28s         Normal    Started                      pod/inline-volume-tester-6f9st              
                        Started container csi-volume-tester\nephemeral-1794                       18s         Normal    Killing                      pod/inline-volume-tester-6f9st                                      Stopping container csi-volume-tester\nephemeral-4116                       62s         Normal    Pulling                      pod/csi-hostpath-attacher-0                                         Pulling image \"quay.io/k8scsi/csi-attacher:v2.1.0\"\nephemeral-4116                       47s         Normal    Pulled                       pod/csi-hostpath-attacher-0                                         Successfully pulled image \"quay.io/k8scsi/csi-attacher:v2.1.0\"\nephemeral-4116                       45s         Normal    Created                      pod/csi-hostpath-attacher-0                                         Created container csi-attacher\nephemeral-4116                       43s         Normal    Started                      pod/csi-hostpath-attacher-0                                         Started container csi-attacher\nephemeral-4116                       72s         Warning   FailedCreate                 statefulset/csi-hostpath-attacher                                   create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher failed error: pods \"csi-hostpath-attacher-0\" is forbidden: unable to validate against any pod security policy: []\nephemeral-4116                       71s         Normal    SuccessfulCreate             statefulset/csi-hostpath-attacher                                   create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful\nephemeral-4116                       62s         Normal    Pulling                      pod/csi-hostpath-provisioner-0                                      Pulling image \"quay.io/k8scsi/csi-provisioner:v1.5.0\"\nephemeral-4116                       47s         Normal    Pulled                       pod/csi-hostpath-provisioner-0                     
                 Successfully pulled image \"quay.io/k8scsi/csi-provisioner:v1.5.0\"\nephemeral-4116                       46s         Normal    Created                      pod/csi-hostpath-provisioner-0                                      Created container csi-provisioner\nephemeral-4116                       42s         Normal    Started                      pod/csi-hostpath-provisioner-0                                      Started container csi-provisioner\nephemeral-4116                       72s         Warning   FailedCreate                 statefulset/csi-hostpath-provisioner                                create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner failed error: pods \"csi-hostpath-provisioner-0\" is forbidden: unable to validate against any pod security policy: []\nephemeral-4116                       71s         Normal    SuccessfulCreate             statefulset/csi-hostpath-provisioner                                create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful\nephemeral-4116                       61s         Normal    Pulling                      pod/csi-hostpath-resizer-0                                          Pulling image \"quay.io/k8scsi/csi-resizer:v0.4.0\"\nephemeral-4116                       47s         Normal    Pulled                       pod/csi-hostpath-resizer-0                                          Successfully pulled image \"quay.io/k8scsi/csi-resizer:v0.4.0\"\nephemeral-4116                       47s         Normal    Created                      pod/csi-hostpath-resizer-0                                          Created container csi-resizer\nephemeral-4116                       43s         Normal    Started                      pod/csi-hostpath-resizer-0                                          Started container csi-resizer\nephemeral-4116                       71s         Warning   FailedCreate                 statefulset/csi-hostpath-resizer        
                            create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer failed error: pods \"csi-hostpath-resizer-0\" is forbidden: unable to validate against any pod security policy: []\nephemeral-4116                       71s         Normal    SuccessfulCreate             statefulset/csi-hostpath-resizer                                    create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful\nephemeral-4116                       66s         Normal    Pulling                      pod/csi-hostpathplugin-0                                            Pulling image \"quay.io/k8scsi/csi-node-driver-registrar:v1.2.0\"\nephemeral-4116                       57s         Normal    Pulled                       pod/csi-hostpathplugin-0                                            Successfully pulled image \"quay.io/k8scsi/csi-node-driver-registrar:v1.2.0\"\nephemeral-4116                       56s         Normal    Created                      pod/csi-hostpathplugin-0                                            Created container node-driver-registrar\nephemeral-4116                       56s         Normal    Started                      pod/csi-hostpathplugin-0                                            Started container node-driver-registrar\nephemeral-4116                       56s         Normal    Pulling                      pod/csi-hostpathplugin-0                                            Pulling image \"quay.io/k8scsi/hostpathplugin:v1.3.0-rc1\"\nephemeral-4116                       44s         Normal    Pulled                       pod/csi-hostpathplugin-0                                            Successfully pulled image \"quay.io/k8scsi/hostpathplugin:v1.3.0-rc1\"\nephemeral-4116                       44s         Normal    Created                      pod/csi-hostpathplugin-0                                            Created container hostpath\nephemeral-4116                       41s         Normal    Started  
                    pod/csi-hostpathplugin-0                                            Started container hostpath\nephemeral-4116                       41s         Normal    Pulling                      pod/csi-hostpathplugin-0                                            Pulling image \"quay.io/k8scsi/livenessprobe:v1.1.0\"\nephemeral-4116                       37s         Normal    Pulled                       pod/csi-hostpathplugin-0                                            Successfully pulled image \"quay.io/k8scsi/livenessprobe:v1.1.0\"\nephemeral-4116                       37s         Normal    Created                      pod/csi-hostpathplugin-0                                            Created container liveness-probe\nephemeral-4116                       35s         Normal    Started                      pod/csi-hostpathplugin-0                                            Started container liveness-probe\nephemeral-4116                       72s         Normal    SuccessfulCreate             statefulset/csi-hostpathplugin                                      create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\nephemeral-4116                       64s         Normal    Pulling                      pod/csi-snapshotter-0                                               Pulling image \"quay.io/k8scsi/csi-snapshotter:v2.0.0\"\nephemeral-4116                       49s         Normal    Pulled                       pod/csi-snapshotter-0                                               Successfully pulled image \"quay.io/k8scsi/csi-snapshotter:v2.0.0\"\nephemeral-4116                       47s         Normal    Created                      pod/csi-snapshotter-0                                               Created container csi-snapshotter\nephemeral-4116                       43s         Normal    Started                      pod/csi-snapshotter-0                                               Started container csi-snapshotter\nephemeral-4116  
                     71s         Warning   FailedCreate                 statefulset/csi-snapshotter                                         create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter failed error: pods \"csi-snapshotter-0\" is forbidden: unable to validate against any pod security policy: []\nephemeral-4116                       71s         Normal    SuccessfulCreate             statefulset/csi-snapshotter                                         create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter successful\nephemeral-4116                       55s         Warning   FailedMount                  pod/inline-volume-tester-s57ft                                      MountVolume.SetUp failed for volume \"my-volume-1\" : kubernetes.io/csi: mounter.SetUpAt failed to get CSI client: driver name csi-hostpath-ephemeral-4116 not found in the list of registered CSI drivers\nephemeral-4116                       55s         Warning   FailedMount                  pod/inline-volume-tester-s57ft                                      MountVolume.SetUp failed for volume \"my-volume-0\" : kubernetes.io/csi: mounter.SetUpAt failed to get CSI client: driver name csi-hostpath-ephemeral-4116 not found in the list of registered CSI drivers\nephemeral-4116                       36s         Normal    Pulling                      pod/inline-volume-tester-s57ft                                      Pulling image \"docker.io/library/busybox:1.29\"\nephemeral-4116                       33s         Normal    Pulled                       pod/inline-volume-tester-s57ft                                      Successfully pulled image \"docker.io/library/busybox:1.29\"\nephemeral-4116                       33s         Normal    Created                      pod/inline-volume-tester-s57ft                                      Created container csi-volume-tester\nephemeral-4116                       32s         Normal    Started                      pod/inline-volume-tester-s57ft      
                                Started container csi-volume-tester\nephemeral-4116                       27s         Normal    Killing                      pod/inline-volume-tester-s57ft                                      Stopping container csi-volume-tester\ninit-container-7415                  16s         Normal    Scheduled                    pod/pod-init-a6d6f31a-10cf-490f-9efd-27a91145d0ed                   Successfully assigned init-container-7415/pod-init-a6d6f31a-10cf-490f-9efd-27a91145d0ed to bootstrap-e2e-minion-group-qn53\ninit-container-7415                  13s         Normal    Pulled                       pod/pod-init-a6d6f31a-10cf-490f-9efd-27a91145d0ed                   Container image \"docker.io/library/busybox:1.29\" already present on machine\ninit-container-7415                  12s         Normal    Created                      pod/pod-init-a6d6f31a-10cf-490f-9efd-27a91145d0ed                   Created container init1\ninit-container-7415                  11s         Normal    Started                      pod/pod-init-a6d6f31a-10cf-490f-9efd-27a91145d0ed                   Started container init1\ninit-container-7415                  9s          Warning   BackOff                      pod/pod-init-a6d6f31a-10cf-490f-9efd-27a91145d0ed                   Back-off restarting failed container\nkube-system                          28s         Normal    Scheduled                    pod/coredns-65567c7b57-6q8sq                                        Successfully assigned kube-system/coredns-65567c7b57-6q8sq to bootstrap-e2e-minion-group-q10p\nkube-system                          25s         Normal    Pulling                      pod/coredns-65567c7b57-6q8sq                                        Pulling image \"k8s.gcr.io/coredns:1.6.5\"\nkube-system                          17s         Normal    Pulled                       pod/coredns-65567c7b57-6q8sq                                        Successfully pulled image 
\"k8s.gcr.io/coredns:1.6.5\"\nkube-system                          17s         Warning   Failed                       pod/coredns-65567c7b57-6q8sq                                        Error: cannot find volume \"config-volume\" to mount into container \"coredns\"\nkube-system                          4m36s       Warning   FailedScheduling             pod/coredns-65567c7b57-kdfdw                                        no nodes available to schedule pods\nkube-system                          4m28s       Warning   FailedScheduling             pod/coredns-65567c7b57-kdfdw                                        0/1 nodes are available: 1 node(s) were unschedulable.\nkube-system                          4m25s       Warning   FailedScheduling             pod/coredns-65567c7b57-kdfdw                                        0/4 nodes are available: 1 node(s) were unschedulable, 3 node(s) had taints that the pod didn't tolerate.\nkube-system                          4m12s       Warning   FailedScheduling             pod/coredns-65567c7b57-kdfdw                                        0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) had taints that the pod didn't tolerate.\nkube-system                          4m1s        Normal    Scheduled                    pod/coredns-65567c7b57-kdfdw                                        Successfully assigned kube-system/coredns-65567c7b57-kdfdw to bootstrap-e2e-minion-group-qn53\nkube-system                          4m          Normal    Pulling                      pod/coredns-65567c7b57-kdfdw                                        Pulling image \"k8s.gcr.io/coredns:1.6.5\"\nkube-system                          3m58s       Normal    Pulled                       pod/coredns-65567c7b57-kdfdw                                        Successfully pulled image \"k8s.gcr.io/coredns:1.6.5\"\nkube-system                          3m58s       Normal    Created                      pod/coredns-65567c7b57-kdfdw                       
                 Created container coredns\nkube-system                          3m58s       Normal    Started                      pod/coredns-65567c7b57-kdfdw                                        Started container coredns\nkube-system                          4m2s        Normal    Scheduled                    pod/coredns-65567c7b57-n7vgj                                        Successfully assigned kube-system/coredns-65567c7b57-n7vgj to bootstrap-e2e-minion-group-qkcq\nkube-system                          4m1s        Normal    Pulling                      pod/coredns-65567c7b57-n7vgj                                        Pulling image \"k8s.gcr.io/coredns:1.6.5\"\nkube-system                          3m59s       Normal    Pulled                       pod/coredns-65567c7b57-n7vgj                                        Successfully pulled image \"k8s.gcr.io/coredns:1.6.5\"\nkube-system                          3m59s       Normal    Created                      pod/coredns-65567c7b57-n7vgj                                        Created container coredns\nkube-system                          3m59s       Normal    Started                      pod/coredns-65567c7b57-n7vgj                                        Started container coredns\nkube-system                          53s         Normal    Scheduled                    pod/coredns-65567c7b57-t4vzb                                        Successfully assigned kube-system/coredns-65567c7b57-t4vzb to bootstrap-e2e-minion-group-vrtv\nkube-system                          51s         Normal    Pulling                      pod/coredns-65567c7b57-t4vzb                                        Pulling image \"k8s.gcr.io/coredns:1.6.5\"\nkube-system                          48s         Normal    Pulled                       pod/coredns-65567c7b57-t4vzb                                        Successfully pulled image \"k8s.gcr.io/coredns:1.6.5\"\nkube-system                          48s         Normal    Created               
       pod/coredns-65567c7b57-t4vzb                                        Created container coredns\nkube-system                          47s         Normal    Started                      pod/coredns-65567c7b57-t4vzb                                        Started container coredns\nkube-system                          43s         Normal    Killing                      pod/coredns-65567c7b57-t4vzb                                        Stopping container coredns\nkube-system                          28s         Normal    Scheduled                    pod/coredns-65567c7b57-xvmds                                        Successfully assigned kube-system/coredns-65567c7b57-xvmds to bootstrap-e2e-minion-group-vrtv\nkube-system                          26s         Normal    Pulled                       pod/coredns-65567c7b57-xvmds                                        Container image \"k8s.gcr.io/coredns:1.6.5\" already present on machine\nkube-system                          25s         Normal    Created                      pod/coredns-65567c7b57-xvmds                                        Created container coredns\nkube-system                          24s         Normal    Started                      pod/coredns-65567c7b57-xvmds                                        Started container coredns\nkube-system                          20s         Normal    Killing                      pod/coredns-65567c7b57-xvmds                                        Stopping container coredns\nkube-system                          54s         Normal    Scheduled                    pod/coredns-65567c7b57-zhl2f                                        Successfully assigned kube-system/coredns-65567c7b57-zhl2f to bootstrap-e2e-minion-group-vrtv\nkube-system                          51s         Normal    Pulling                      pod/coredns-65567c7b57-zhl2f                                        Pulling image \"k8s.gcr.io/coredns:1.6.5\"\nkube-system                          48s         
Normal    Pulled                       pod/coredns-65567c7b57-zhl2f                                        Successfully pulled image \"k8s.gcr.io/coredns:1.6.5\"\nkube-system                          48s         Normal    Created                      pod/coredns-65567c7b57-zhl2f                                        Created container coredns\nkube-system                          47s         Normal    Started                      pod/coredns-65567c7b57-zhl2f                                        Started container coredns\nkube-system                          34s         Normal    Killing                      pod/coredns-65567c7b57-zhl2f                                        Stopping container coredns\nkube-system                          31s         Warning   Unhealthy                    pod/coredns-65567c7b57-zhl2f                                        Readiness probe failed: Get http://10.64.4.16:8181/ready: dial tcp 10.64.4.16:8181: connect: connection refused\nkube-system                          4m41s       Warning   FailedCreate                 replicaset/coredns-65567c7b57                                       Error creating: pods \"coredns-65567c7b57-\" is forbidden: no providers available to validate pod request\nkube-system                          4m38s       Warning   FailedCreate                 replicaset/coredns-65567c7b57                                       Error creating: pods \"coredns-65567c7b57-\" is forbidden: unable to validate against any pod security policy: []\nkube-system                          4m36s       Normal    SuccessfulCreate             replicaset/coredns-65567c7b57                                       Created pod: coredns-65567c7b57-kdfdw\nkube-system                          4m2s        Normal    SuccessfulCreate             replicaset/coredns-65567c7b57                                       Created pod: coredns-65567c7b57-n7vgj\nkube-system                          54s         Normal    SuccessfulCreate             
replicaset/coredns-65567c7b57                                       Created pod: coredns-65567c7b57-zhl2f\nkube-system                          54s         Normal    SuccessfulCreate             replicaset/coredns-65567c7b57                                       Created pod: coredns-65567c7b57-t4vzb\nkube-system                          44s         Normal    SuccessfulDelete             replicaset/coredns-65567c7b57                                       Deleted pod: coredns-65567c7b57-t4vzb\nkube-system                          34s         Normal    SuccessfulDelete             replicaset/coredns-65567c7b57                                       Deleted pod: coredns-65567c7b57-zhl2f\nkube-system                          29s         Normal    SuccessfulCreate             replicaset/coredns-65567c7b57                                       Created pod: coredns-65567c7b57-xvmds\nkube-system                          28s         Normal    SuccessfulCreate             replicaset/coredns-65567c7b57                                       Created pod: coredns-65567c7b57-6q8sq\nkube-system                          20s         Normal    SuccessfulDelete             replicaset/coredns-65567c7b57                                       Deleted pod: coredns-65567c7b57-6q8sq\nkube-system                          20s         Normal    SuccessfulDelete             replicaset/coredns-65567c7b57                                       Deleted pod: coredns-65567c7b57-xvmds\nkube-system                          4m41s       Normal    ScalingReplicaSet            deployment/coredns                                                  Scaled up replica set coredns-65567c7b57 to 1\nkube-system                          4m2s        Normal    ScalingReplicaSet            deployment/coredns                                                  Scaled up replica set coredns-65567c7b57 to 2\nkube-system                          29s         Normal    ScalingReplicaSet            deployment/coredns                
                                  Scaled up replica set coredns-65567c7b57 to 4\nkube-system                          44s         Normal    ScalingReplicaSet            deployment/coredns                                                  Scaled down replica set coredns-65567c7b57 to 3\nkube-system                          20s         Normal    ScalingReplicaSet            deployment/coredns                                                  Scaled down replica set coredns-65567c7b57 to 2\nkube-system                          4m37s       Warning   FailedScheduling             pod/event-exporter-v0.3.1-747b47fcd-js4fh                           no nodes available to schedule pods\nkube-system                          4m27s       Warning   FailedScheduling             pod/event-exporter-v0.3.1-747b47fcd-js4fh                           0/1 nodes are available: 1 node(s) were unschedulable.\nkube-system                          4m13s       Warning   FailedScheduling             pod/event-exporter-v0.3.1-747b47fcd-js4fh                           0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) had taints that the pod didn't tolerate.\nkube-system                          4m2s        Normal    Scheduled                    pod/event-exporter-v0.3.1-747b47fcd-js4fh                           Successfully assigned kube-system/event-exporter-v0.3.1-747b47fcd-js4fh to bootstrap-e2e-minion-group-q10p\nkube-system                          4m          Normal    Pulling                      pod/event-exporter-v0.3.1-747b47fcd-js4fh                           Pulling image \"k8s.gcr.io/event-exporter:v0.3.1\"\nkube-system                          3m57s       Normal    Pulled                       pod/event-exporter-v0.3.1-747b47fcd-js4fh                           Successfully pulled image \"k8s.gcr.io/event-exporter:v0.3.1\"\nkube-system                          3m57s       Normal    Created                      pod/event-exporter-v0.3.1-747b47fcd-js4fh                    
       Created container event-exporter\nkube-system                          3m56s       Normal    Started                      pod/event-exporter-v0.3.1-747b47fcd-js4fh                           Started container event-exporter\nkube-system                          3m56s       Normal    Pulling                      pod/event-exporter-v0.3.1-747b47fcd-js4fh                           Pulling image \"k8s.gcr.io/prometheus-to-sd:v0.7.2\"\nkube-system                          3m54s       Normal    Pulled                       pod/event-exporter-v0.3.1-747b47fcd-js4fh                           Successfully pulled image \"k8s.gcr.io/prometheus-to-sd:v0.7.2\"\nkube-system                          3m54s       Normal    Created                      pod/event-exporter-v0.3.1-747b47fcd-js4fh                           Created container prometheus-to-sd-exporter\nkube-system                          3m54s       Normal    Started                      pod/event-exporter-v0.3.1-747b47fcd-js4fh                           Started container prometheus-to-sd-exporter\nkube-system                          4m41s       Normal    SuccessfulCreate             replicaset/event-exporter-v0.3.1-747b47fcd                          Created pod: event-exporter-v0.3.1-747b47fcd-js4fh\nkube-system                          4m41s       Normal    ScalingReplicaSet            deployment/event-exporter-v0.3.1                                    Scaled up replica set event-exporter-v0.3.1-747b47fcd to 1\nkube-system                          4m35s       Warning   FailedScheduling             pod/fluentd-gcp-scaler-76d9c77b4d-sk5bz                             no nodes available to schedule pods\nkube-system                          4m27s       Warning   FailedScheduling             pod/fluentd-gcp-scaler-76d9c77b4d-sk5bz                             0/1 nodes are available: 1 node(s) were unschedulable.\nkube-system                          4m16s       Warning   FailedScheduling             
pod/fluentd-gcp-scaler-76d9c77b4d-sk5bz  0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) had taints that the pod didn't tolerate.
kube-system  4m8s   Normal   Scheduled          pod/fluentd-gcp-scaler-76d9c77b4d-sk5bz  Successfully assigned kube-system/fluentd-gcp-scaler-76d9c77b4d-sk5bz to bootstrap-e2e-minion-group-vrtv
kube-system  4m7s   Normal   Pulling            pod/fluentd-gcp-scaler-76d9c77b4d-sk5bz  Pulling image "k8s.gcr.io/fluentd-gcp-scaler:0.5.2"
kube-system  4m3s   Normal   Pulled             pod/fluentd-gcp-scaler-76d9c77b4d-sk5bz  Successfully pulled image "k8s.gcr.io/fluentd-gcp-scaler:0.5.2"
kube-system  4m2s   Normal   Created            pod/fluentd-gcp-scaler-76d9c77b4d-sk5bz  Created container fluentd-gcp-scaler
kube-system  4m2s   Normal   Started            pod/fluentd-gcp-scaler-76d9c77b4d-sk5bz  Started container fluentd-gcp-scaler
kube-system  4m35s  Normal   SuccessfulCreate   replicaset/fluentd-gcp-scaler-76d9c77b4d  Created pod: fluentd-gcp-scaler-76d9c77b4d-sk5bz
kube-system  4m35s  Normal   ScalingReplicaSet  deployment/fluentd-gcp-scaler  Scaled up replica set fluentd-gcp-scaler-76d9c77b4d to 1
kube-system  3m17s  Normal   Scheduled          pod/fluentd-gcp-v3.2.0-6tspz  Successfully assigned kube-system/fluentd-gcp-v3.2.0-6tspz to bootstrap-e2e-master
kube-system  3m16s  Normal   Pulled             pod/fluentd-gcp-v3.2.0-6tspz  Container image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17" already present on machine
kube-system  3m16s  Normal   Created            pod/fluentd-gcp-v3.2.0-6tspz  Created container fluentd-gcp
kube-system  3m15s  Normal   Started            pod/fluentd-gcp-v3.2.0-6tspz  Started container fluentd-gcp
kube-system  3m15s  Normal   Pulled             pod/fluentd-gcp-v3.2.0-6tspz  Container image "k8s.gcr.io/prometheus-to-sd:v0.5.0" already present on machine
kube-system  3m15s  Normal   Created            pod/fluentd-gcp-v3.2.0-6tspz  Created container prometheus-to-sd-exporter
kube-system  3m12s  Normal   Started            pod/fluentd-gcp-v3.2.0-6tspz  Started container prometheus-to-sd-exporter
kube-system  4m24s  Normal   Scheduled          pod/fluentd-gcp-v3.2.0-g8wd5  Successfully assigned kube-system/fluentd-gcp-v3.2.0-g8wd5 to bootstrap-e2e-minion-group-qkcq
kube-system  4m23s  Warning  FailedMount        pod/fluentd-gcp-v3.2.0-g8wd5  MountVolume.SetUp failed for volume "config-volume" : failed to sync configmap cache: timed out waiting for the condition
kube-system  4m23s  Warning  FailedMount        pod/fluentd-gcp-v3.2.0-g8wd5  MountVolume.SetUp failed for volume "fluentd-gcp-token-5vkfw" : failed to sync secret cache: timed out waiting for the condition
kube-system  4m22s  Normal   Pulling            pod/fluentd-gcp-v3.2.0-g8wd5  Pulling image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17"
kube-system  4m13s  Normal   Pulled             pod/fluentd-gcp-v3.2.0-g8wd5  Successfully pulled image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17"
kube-system  4m12s  Normal   Created            pod/fluentd-gcp-v3.2.0-g8wd5  Created container fluentd-gcp
kube-system  4m12s  Normal   Started            pod/fluentd-gcp-v3.2.0-g8wd5  Started container fluentd-gcp
kube-system  4m12s  Normal   Pulled             pod/fluentd-gcp-v3.2.0-g8wd5  Container image "k8s.gcr.io/prometheus-to-sd:v0.5.0" already present on machine
kube-system  4m12s  Normal   Created            pod/fluentd-gcp-v3.2.0-g8wd5  Created container prometheus-to-sd-exporter
kube-system  4m11s  Normal   Started            pod/fluentd-gcp-v3.2.0-g8wd5  Started container prometheus-to-sd-exporter
kube-system  3m34s  Normal   Killing            pod/fluentd-gcp-v3.2.0-g8wd5  Stopping container fluentd-gcp
kube-system  3m34s  Normal   Killing            pod/fluentd-gcp-v3.2.0-g8wd5  Stopping container prometheus-to-sd-exporter
kube-system  4m25s  Normal   Scheduled          pod/fluentd-gcp-v3.2.0-hvwts  Successfully assigned kube-system/fluentd-gcp-v3.2.0-hvwts to bootstrap-e2e-minion-group-q10p
kube-system  4m24s  Warning  FailedMount        pod/fluentd-gcp-v3.2.0-hvwts  MountVolume.SetUp failed for volume "config-volume" : failed to sync configmap cache: timed out waiting for the condition
kube-system  4m24s  Warning  FailedMount        pod/fluentd-gcp-v3.2.0-hvwts  MountVolume.SetUp failed for volume "fluentd-gcp-token-5vkfw" : failed to sync secret cache: timed out waiting for the condition
kube-system  4m23s  Normal   Pulling            pod/fluentd-gcp-v3.2.0-hvwts  Pulling image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17"
kube-system  4m12s  Normal   Pulled             pod/fluentd-gcp-v3.2.0-hvwts  Successfully pulled image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17"
kube-system  4m11s  Normal   Created            pod/fluentd-gcp-v3.2.0-hvwts  Created container fluentd-gcp
kube-system  4m11s  Normal   Started            pod/fluentd-gcp-v3.2.0-hvwts  Started container fluentd-gcp
kube-system  4m11s  Normal   Pulled             pod/fluentd-gcp-v3.2.0-hvwts  Container image "k8s.gcr.io/prometheus-to-sd:v0.5.0" already present on machine
kube-system  4m11s  Normal   Created            pod/fluentd-gcp-v3.2.0-hvwts  Created container prometheus-to-sd-exporter
kube-system  4m10s  Normal   Started            pod/fluentd-gcp-v3.2.0-hvwts  Started container prometheus-to-sd-exporter
kube-system  4m     Normal   Killing            pod/fluentd-gcp-v3.2.0-hvwts  Stopping container fluentd-gcp
kube-system  4m     Normal   Killing            pod/fluentd-gcp-v3.2.0-hvwts  Stopping container prometheus-to-sd-exporter
kube-system  4m29s  Normal   Scheduled          pod/fluentd-gcp-v3.2.0-m4h9z  Successfully assigned kube-system/fluentd-gcp-v3.2.0-m4h9z to bootstrap-e2e-master
kube-system  4m20s  Normal   Pulling            pod/fluentd-gcp-v3.2.0-m4h9z  Pulling image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17"
kube-system  4m6s   Normal   Pulled             pod/fluentd-gcp-v3.2.0-m4h9z  Successfully pulled image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17"
kube-system  4m2s   Normal   Created            pod/fluentd-gcp-v3.2.0-m4h9z  Created container fluentd-gcp
kube-system  4m2s   Normal   Started            pod/fluentd-gcp-v3.2.0-m4h9z  Started container fluentd-gcp
kube-system  4m2s   Normal   Pulled             pod/fluentd-gcp-v3.2.0-m4h9z  Container image "k8s.gcr.io/prometheus-to-sd:v0.5.0" already present on machine
kube-system  4m2s   Normal   Created            pod/fluentd-gcp-v3.2.0-m4h9z  Created container prometheus-to-sd-exporter
kube-system  4m1s   Normal   Started            pod/fluentd-gcp-v3.2.0-m4h9z  Started container prometheus-to-sd-exporter
kube-system  3m24s  Normal   Killing            pod/fluentd-gcp-v3.2.0-m4h9z  Stopping container fluentd-gcp
kube-system  3m24s  Normal   Killing            pod/fluentd-gcp-v3.2.0-m4h9z  Stopping container prometheus-to-sd-exporter
kube-system  4m24s  Normal   Scheduled          pod/fluentd-gcp-v3.2.0-mw4rn  Successfully assigned kube-system/fluentd-gcp-v3.2.0-mw4rn to bootstrap-e2e-minion-group-vrtv
kube-system  4m23s  Normal   Pulling            pod/fluentd-gcp-v3.2.0-mw4rn  Pulling image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17"
kube-system  4m13s  Normal   Pulled             pod/fluentd-gcp-v3.2.0-mw4rn  Successfully pulled image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17"
kube-system  4m13s  Normal   Created            pod/fluentd-gcp-v3.2.0-mw4rn  Created container fluentd-gcp
kube-system  4m13s  Normal   Started            pod/fluentd-gcp-v3.2.0-mw4rn  Started container fluentd-gcp
kube-system  4m13s  Normal   Pulled             pod/fluentd-gcp-v3.2.0-mw4rn  Container image "k8s.gcr.io/prometheus-to-sd:v0.5.0" already present on machine
kube-system  4m13s  Normal   Created            pod/fluentd-gcp-v3.2.0-mw4rn  Created container prometheus-to-sd-exporter
kube-system  4m13s  Normal   Started            pod/fluentd-gcp-v3.2.0-mw4rn  Started container prometheus-to-sd-exporter
kube-system  3m48s  Normal   Killing            pod/fluentd-gcp-v3.2.0-mw4rn  Stopping container fluentd-gcp
kube-system  3m48s  Normal   Killing            pod/fluentd-gcp-v3.2.0-mw4rn  Stopping container prometheus-to-sd-exporter
kube-system  3m36s  Normal   Scheduled          pod/fluentd-gcp-v3.2.0-mxnmk  Successfully assigned kube-system/fluentd-gcp-v3.2.0-mxnmk to bootstrap-e2e-minion-group-vrtv
kube-system  3m35s  Normal   Pulled             pod/fluentd-gcp-v3.2.0-mxnmk  Container image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17" already present on machine
kube-system  3m35s  Normal   Created            pod/fluentd-gcp-v3.2.0-mxnmk  Created container fluentd-gcp
kube-system  3m35s  Normal   Started            pod/fluentd-gcp-v3.2.0-mxnmk  Started container fluentd-gcp
kube-system  3m35s  Normal   Pulled             pod/fluentd-gcp-v3.2.0-mxnmk  Container image "k8s.gcr.io/prometheus-to-sd:v0.5.0" already present on machine
kube-system  3m35s  Normal   Created            pod/fluentd-gcp-v3.2.0-mxnmk  Created container prometheus-to-sd-exporter
kube-system  3m34s  Normal   Started            pod/fluentd-gcp-v3.2.0-mxnmk  Started container prometheus-to-sd-exporter
kube-system  4m25s  Normal   Scheduled          pod/fluentd-gcp-v3.2.0-pfpj2  Successfully assigned kube-system/fluentd-gcp-v3.2.0-pfpj2 to bootstrap-e2e-minion-group-qn53
kube-system  4m24s  Warning  FailedMount        pod/fluentd-gcp-v3.2.0-pfpj2  MountVolume.SetUp failed for volume "fluentd-gcp-token-5vkfw" : failed to sync secret cache: timed out waiting for the condition
kube-system  4m24s  Warning  FailedMount        pod/fluentd-gcp-v3.2.0-pfpj2  MountVolume.SetUp failed for volume "config-volume" : failed to sync configmap cache: timed out waiting for the condition
kube-system  4m23s  Normal   Pulling            pod/fluentd-gcp-v3.2.0-pfpj2  Pulling image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17"
kube-system  4m14s  Normal   Pulled             pod/fluentd-gcp-v3.2.0-pfpj2  Successfully pulled image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17"
kube-system  4m12s  Normal   Created            pod/fluentd-gcp-v3.2.0-pfpj2  Created container fluentd-gcp
kube-system  4m12s  Normal   Started            pod/fluentd-gcp-v3.2.0-pfpj2  Started container fluentd-gcp
kube-system  4m12s  Normal   Pulled             pod/fluentd-gcp-v3.2.0-pfpj2  Container image "k8s.gcr.io/prometheus-to-sd:v0.5.0" already present on machine
kube-system  4m11s  Normal   Created            pod/fluentd-gcp-v3.2.0-pfpj2  Created container prometheus-to-sd-exporter
kube-system  4m11s  Normal   Started            pod/fluentd-gcp-v3.2.0-pfpj2  Started container prometheus-to-sd-exporter
kube-system  3m11s  Normal   Killing            pod/fluentd-gcp-v3.2.0-pfpj2  Stopping container fluentd-gcp
kube-system  3m11s  Normal   Killing            pod/fluentd-gcp-v3.2.0-pfpj2  Stopping container prometheus-to-sd-exporter
kube-system  2m56s  Normal   Scheduled          pod/fluentd-gcp-v3.2.0-t6mk4  Successfully assigned kube-system/fluentd-gcp-v3.2.0-t6mk4 to bootstrap-e2e-minion-group-qn53
kube-system  2m56s  Normal   Pulled             pod/fluentd-gcp-v3.2.0-t6mk4  Container image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17" already present on machine
kube-system  2m56s  Normal   Created            pod/fluentd-gcp-v3.2.0-t6mk4  Created container fluentd-gcp
kube-system  2m56s  Normal   Started            pod/fluentd-gcp-v3.2.0-t6mk4  Started container fluentd-gcp
kube-system  2m56s  Normal   Pulled             pod/fluentd-gcp-v3.2.0-t6mk4  Container image "k8s.gcr.io/prometheus-to-sd:v0.5.0" already present on machine
kube-system  2m55s  Normal   Created            pod/fluentd-gcp-v3.2.0-t6mk4  Created container prometheus-to-sd-exporter
kube-system  2m54s  Normal   Started            pod/fluentd-gcp-v3.2.0-t6mk4  Started container prometheus-to-sd-exporter
kube-system  3m50s  Normal   Scheduled          pod/fluentd-gcp-v3.2.0-vqmcb  Successfully assigned kube-system/fluentd-gcp-v3.2.0-vqmcb to bootstrap-e2e-minion-group-q10p
kube-system  3m49s  Normal   Pulled             pod/fluentd-gcp-v3.2.0-vqmcb  Container image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17" already present on machine
kube-system  3m49s  Normal   Created            pod/fluentd-gcp-v3.2.0-vqmcb  Created container fluentd-gcp
kube-system  3m49s  Normal   Started            pod/fluentd-gcp-v3.2.0-vqmcb  Started container fluentd-gcp
kube-system  3m49s  Normal   Pulled             pod/fluentd-gcp-v3.2.0-vqmcb  Container image "k8s.gcr.io/prometheus-to-sd:v0.5.0" already present on machine
kube-system  3m49s  Normal   Created            pod/fluentd-gcp-v3.2.0-vqmcb  Created container prometheus-to-sd-exporter
kube-system  3m48s  Normal   Started            pod/fluentd-gcp-v3.2.0-vqmcb  Started container prometheus-to-sd-exporter
kube-system  3m26s  Normal   Scheduled          pod/fluentd-gcp-v3.2.0-zcg6h  Successfully assigned kube-system/fluentd-gcp-v3.2.0-zcg6h to bootstrap-e2e-minion-group-qkcq
kube-system  3m25s  Normal   Pulled             pod/fluentd-gcp-v3.2.0-zcg6h  Container image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17" already present on machine
kube-system  3m25s  Normal   Created            pod/fluentd-gcp-v3.2.0-zcg6h  Created container fluentd-gcp
kube-system  3m25s  Normal   Started            pod/fluentd-gcp-v3.2.0-zcg6h  Started container fluentd-gcp
kube-system  3m25s  Normal   Pulled             pod/fluentd-gcp-v3.2.0-zcg6h  Container image "k8s.gcr.io/prometheus-to-sd:v0.5.0" already present on machine
kube-system  3m25s  Normal   Created            pod/fluentd-gcp-v3.2.0-zcg6h  Created container prometheus-to-sd-exporter
kube-system  3m24s  Normal   Started            pod/fluentd-gcp-v3.2.0-zcg6h  Started container prometheus-to-sd-exporter
kube-system  4m30s  Normal   SuccessfulCreate   daemonset/fluentd-gcp-v3.2.0  Created pod: fluentd-gcp-v3.2.0-m4h9z
kube-system  4m26s  Normal   SuccessfulCreate   daemonset/fluentd-gcp-v3.2.0  Created pod: fluentd-gcp-v3.2.0-pfpj2
kube-system  4m26s  Normal   SuccessfulCreate   daemonset/fluentd-gcp-v3.2.0  Created pod: fluentd-gcp-v3.2.0-hvwts
kube-system  4m25s  Normal   SuccessfulCreate   daemonset/fluentd-gcp-v3.2.0  Created pod: fluentd-gcp-v3.2.0-g8wd5
kube-system  4m24s  Normal   SuccessfulCreate   daemonset/fluentd-gcp-v3.2.0  Created pod: fluentd-gcp-v3.2.0-mw4rn
kube-system  4m     Normal   SuccessfulDelete   daemonset/fluentd-gcp-v3.2.0  Deleted pod: fluentd-gcp-v3.2.0-hvwts
kube-system  3m50s  Normal   SuccessfulCreate   daemonset/fluentd-gcp-v3.2.0  Created pod: fluentd-gcp-v3.2.0-vqmcb
kube-system  3m48s  Normal   SuccessfulDelete   daemonset/fluentd-gcp-v3.2.0  Deleted pod: fluentd-gcp-v3.2.0-mw4rn
kube-system  3m36s  Normal   SuccessfulCreate   daemonset/fluentd-gcp-v3.2.0  Created pod: fluentd-gcp-v3.2.0-mxnmk
kube-system  3m34s  Normal   SuccessfulDelete   daemonset/fluentd-gcp-v3.2.0  Deleted pod: fluentd-gcp-v3.2.0-g8wd5
kube-system  3m26s  Normal   SuccessfulCreate   daemonset/fluentd-gcp-v3.2.0  Created pod: fluentd-gcp-v3.2.0-zcg6h
kube-system  3m24s  Normal   SuccessfulDelete   daemonset/fluentd-gcp-v3.2.0  Deleted pod: fluentd-gcp-v3.2.0-m4h9z
kube-system  3m17s  Normal   SuccessfulCreate   daemonset/fluentd-gcp-v3.2.0  Created pod: fluentd-gcp-v3.2.0-6tspz
kube-system  3m11s  Normal   SuccessfulDelete   daemonset/fluentd-gcp-v3.2.0  Deleted pod: fluentd-gcp-v3.2.0-pfpj2
kube-system  2m56s  Normal   SuccessfulCreate   daemonset/fluentd-gcp-v3.2.0  (combined from similar events): Created pod: fluentd-gcp-v3.2.0-t6mk4
kube-system  4m19s  Normal   LeaderElection     configmap/ingress-gce-lock  bootstrap-e2e-master_81ba0 became leader
kube-system  5m     Normal   LeaderElection     endpoints/kube-controller-manager  bootstrap-e2e-master_197334f0-6e8d-4b10-b666-0e8fc3e0a58b became leader
kube-system  5m     Normal   LeaderElection     lease/kube-controller-manager  bootstrap-e2e-master_197334f0-6e8d-4b10-b666-0e8fc3e0a58b became leader
kube-system  32s    Normal   Scheduled          pod/kube-dns-autoscaler-65bc6d4889-c4f5l  Successfully assigned kube-system/kube-dns-autoscaler-65bc6d4889-c4f5l to bootstrap-e2e-minion-group-qkcq
kube-system  31s    Normal   Pulled             pod/kube-dns-autoscaler-65bc6d4889-c4f5l  Container image "k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.7.1" already present on machine
kube-system  31s    Normal   Created            pod/kube-dns-autoscaler-65bc6d4889-c4f5l  Created container autoscaler
kube-system  30s    Normal   Started            pod/kube-dns-autoscaler-65bc6d4889-c4f5l  Started container autoscaler
kube-system  4m30s  Warning  FailedScheduling   pod/kube-dns-autoscaler-65bc6d4889-sqctq  no nodes available to schedule pods
kube-system  4m28s  Warning  FailedScheduling   pod/kube-dns-autoscaler-65bc6d4889-sqctq  0/1 nodes are available: 1 node(s) were unschedulable.
kube-system  4m16s  Warning  FailedScheduling   pod/kube-dns-autoscaler-65bc6d4889-sqctq  0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) had taints that the pod didn't tolerate.
kube-system  4m8s   Normal   Scheduled          pod/kube-dns-autoscaler-65bc6d4889-sqctq  Successfully assigned kube-system/kube-dns-autoscaler-65bc6d4889-sqctq to bootstrap-e2e-minion-group-qkcq
kube-system  4m7s   Normal   Pulling            pod/kube-dns-autoscaler-65bc6d4889-sqctq  Pulling image "k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.7.1"
kube-system  4m5s   Normal   Pulled             pod/kube-dns-autoscaler-65bc6d4889-sqctq  Successfully pulled image "k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.7.1"
kube-system  4m5s   Normal   Created            pod/kube-dns-autoscaler-65bc6d4889-sqctq  Created container autoscaler
kube-system  4m5s   Normal   Started            pod/kube-dns-autoscaler-65bc6d4889-sqctq  Started container autoscaler
kube-system  32s    Normal   Killing            pod/kube-dns-autoscaler-65bc6d4889-sqctq  Stopping container autoscaler
kube-system  4m35s  Warning  FailedCreate       replicaset/kube-dns-autoscaler-65bc6d4889  Error creating: pods "kube-dns-autoscaler-65bc6d4889-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found
kube-system  4m30s  Normal   SuccessfulCreate   replicaset/kube-dns-autoscaler-65bc6d4889  Created pod: kube-dns-autoscaler-65bc6d4889-sqctq
kube-system  32s    Normal   SuccessfulCreate   replicaset/kube-dns-autoscaler-65bc6d4889  Created pod: kube-dns-autoscaler-65bc6d4889-c4f5l
kube-system  4m41s  Normal   ScalingReplicaSet  deployment/kube-dns-autoscaler  Scaled up replica set kube-dns-autoscaler-65bc6d4889 to 1
kube-system  4m25s  Normal   Pulled             pod/kube-proxy-bootstrap-e2e-minion-group-q10p  Container image "k8s.gcr.io/kube-proxy-amd64:v1.18.0-alpha.1.755_05209312b74eac" already present on machine
kube-system  4m25s  Normal   Created            pod/kube-proxy-bootstrap-e2e-minion-group-q10p  Created container kube-proxy
kube-system  4m24s  Normal   Started            pod/kube-proxy-bootstrap-e2e-minion-group-q10p  Started container kube-proxy
kube-system  4m24s  Normal   Pulled             pod/kube-proxy-bootstrap-e2e-minion-group-qkcq  Container image "k8s.gcr.io/kube-proxy-amd64:v1.18.0-alpha.1.755_05209312b74eac" already present on machine
kube-system  4m24s  Normal   Created            pod/kube-proxy-bootstrap-e2e-minion-group-qkcq  Created container kube-proxy
kube-system  4m24s  Normal   Started            pod/kube-proxy-bootstrap-e2e-minion-group-qkcq  Started container kube-proxy
kube-system  4m25s  Normal   Pulled             pod/kube-proxy-bootstrap-e2e-minion-group-qn53  Container image "k8s.gcr.io/kube-proxy-amd64:v1.18.0-alpha.1.755_05209312b74eac" already present on machine
kube-system  4m25s  Normal   Created            pod/kube-proxy-bootstrap-e2e-minion-group-qn53  Created container kube-proxy
kube-system  4m25s  Normal   Started            pod/kube-proxy-bootstrap-e2e-minion-group-qn53  Started container kube-proxy
kube-system  4m23s  Normal   Pulled             pod/kube-proxy-bootstrap-e2e-minion-group-vrtv  Container image "k8s.gcr.io/kube-proxy-amd64:v1.18.0-alpha.1.755_05209312b74eac" already present on machine
kube-system  4m23s  Normal   Created            pod/kube-proxy-bootstrap-e2e-minion-group-vrtv  Created container kube-proxy
kube-system  4m23s  Normal   Started            pod/kube-proxy-bootstrap-e2e-minion-group-vrtv  Started container kube-proxy
kube-system  4m59s  Normal   LeaderElection     endpoints/kube-scheduler  bootstrap-e2e-master_5d7b243b-8849-4a10-baf7-fc0a85897178 became leader
kube-system  4m59s  Normal   LeaderElection     lease/kube-scheduler  bootstrap-e2e-master_5d7b243b-8849-4a10-baf7-fc0a85897178 became leader
kube-system  4m35s  Warning  FailedScheduling   pod/kubernetes-dashboard-7778f8b456-wjltm  no nodes available to schedule pods
kube-system  4m29s  Warning  FailedScheduling   pod/kubernetes-dashboard-7778f8b456-wjltm  0/1 nodes are available: 1 node(s) were unschedulable.
kube-system  4m26s  Warning  FailedScheduling   pod/kubernetes-dashboard-7778f8b456-wjltm  0/2 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 1 node(s) were unschedulable.
kube-system  4m11s  Warning  FailedScheduling   pod/kubernetes-dashboard-7778f8b456-wjltm  0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) had taints that the pod didn't tolerate.
kube-system  4m     Normal   Scheduled          pod/kubernetes-dashboard-7778f8b456-wjltm  Successfully assigned kube-system/kubernetes-dashboard-7778f8b456-wjltm to bootstrap-e2e-minion-group-qkcq
kube-system  3m59s  Normal   Pulling            pod/kubernetes-dashboard-7778f8b456-wjltm  Pulling image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1"
kube-system  3m56s  Normal   Pulled             pod/kubernetes-dashboard-7778f8b456-wjltm  Successfully pulled image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1"
kube-system  3m54s  Normal   Created            pod/kubernetes-dashboard-7778f8b456-wjltm  Created container kubernetes-dashboard
kube-system  3m54s  Normal   Started            pod/kubernetes-dashboard-7778f8b456-wjltm  Started container kubernetes-dashboard
kube-system  4m35s  Normal   SuccessfulCreate   replicaset/kubernetes-dashboard-7778f8b456  Created pod: kubernetes-dashboard-7778f8b456-wjltm
kube-system  4m35s  Normal   ScalingReplicaSet  deployment/kubernetes-dashboard  Scaled up replica set kubernetes-dashboard-7778f8b456 to 1
kube-system  4m35s  Warning  FailedScheduling   pod/l7-default-backend-678889f899-4q2t5  no nodes available to schedule pods
kube-system  4m27s  Warning  FailedScheduling   pod/l7-default-backend-678889f899-4q2t5  0/1 nodes are available: 1 node(s) were unschedulable.
kube-system  4m17s  Warning  FailedScheduling   pod/l7-default-backend-678889f899-4q2t5  0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) had taints that the pod didn't tolerate.
kube-system  4m9s   Normal   Scheduled          pod/l7-default-backend-678889f899-4q2t5  Successfully assigned kube-system/l7-default-backend-678889f899-4q2t5 to bootstrap-e2e-minion-group-q10p
kube-system  4m     Normal   Pulling            pod/l7-default-backend-678889f899-4q2t5  Pulling image "k8s.gcr.io/ingress-gce-404-server-with-metrics-amd64:v1.6.0"
kube-system  3m59s  Normal   Pulled             pod/l7-default-backend-678889f899-4q2t5  Successfully pulled image "k8s.gcr.io/ingress-gce-404-server-with-metrics-amd64:v1.6.0"
kube-system  3m59s  Normal   Created            pod/l7-default-backend-678889f899-4q2t5  Created container default-http-backend
kube-system  3m51s  Normal   Started            pod/l7-default-backend-678889f899-4q2t5  Started container default-http-backend
kube-system  4m41s  Warning  FailedCreate       replicaset/l7-default-backend-678889f899  Error creating: pods "l7-default-backend-678889f899-" is forbidden: no providers available to validate pod request
kube-system  4m38s  Warning  FailedCreate       replicaset/l7-default-backend-678889f899  Error creating: pods "l7-default-backend-678889f899-" is forbidden: unable to validate against any pod security policy: []
kube-system  4m36s  Normal   SuccessfulCreate   replicaset/l7-default-backend-678889f899  Created pod: l7-default-backend-678889f899-4q2t5
kube-system  4m41s  Normal   ScalingReplicaSet  deployment/l7-default-backend  Scaled up replica set l7-default-backend-678889f899 to 1
kube-system  4m33s  Normal   Created            pod/l7-lb-controller-bootstrap-e2e-master  Created container l7-lb-controller
kube-system  4m30s  Normal   Started            pod/l7-lb-controller-bootstrap-e2e-master  Started container l7-lb-controller
kube-system  4m34s  Normal   Pulled
                   pod/l7-lb-controller-bootstrap-e2e-master                           Container image \"k8s.gcr.io/ingress-gce-glbc-amd64:v1.6.1\" already present on machine\nkube-system                          4m25s       Normal    Scheduled                    pod/metadata-proxy-v0.1-666fv                                       Successfully assigned kube-system/metadata-proxy-v0.1-666fv to bootstrap-e2e-minion-group-qn53\nkube-system                          4m24s       Normal    Pulling                      pod/metadata-proxy-v0.1-666fv                                       Pulling image \"k8s.gcr.io/metadata-proxy:v0.1.12\"\nkube-system                          4m23s       Normal    Pulled                       pod/metadata-proxy-v0.1-666fv                                       Successfully pulled image \"k8s.gcr.io/metadata-proxy:v0.1.12\"\nkube-system                          4m23s       Normal    Created                      pod/metadata-proxy-v0.1-666fv                                       Created container metadata-proxy\nkube-system                          4m22s       Normal    Started                      pod/metadata-proxy-v0.1-666fv                                       Started container metadata-proxy\nkube-system                          4m22s       Normal    Pulling                      pod/metadata-proxy-v0.1-666fv                                       Pulling image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\"\nkube-system                          4m20s       Normal    Pulled                       pod/metadata-proxy-v0.1-666fv                                       Successfully pulled image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\"\nkube-system                          4m18s       Normal    Created                      pod/metadata-proxy-v0.1-666fv                                       Created container prometheus-to-sd-exporter\nkube-system                          4m16s       Normal    Started                      pod/metadata-proxy-v0.1-666fv            
                           Started container prometheus-to-sd-exporter\nkube-system                          4m25s       Normal    Scheduled                    pod/metadata-proxy-v0.1-9nsx7                                       Successfully assigned kube-system/metadata-proxy-v0.1-9nsx7 to bootstrap-e2e-minion-group-qkcq\nkube-system                          4m23s       Warning   FailedMount                  pod/metadata-proxy-v0.1-9nsx7                                       MountVolume.SetUp failed for volume \"metadata-proxy-token-mplx6\" : failed to sync secret cache: timed out waiting for the condition\nkube-system                          4m21s       Normal    Pulling                      pod/metadata-proxy-v0.1-9nsx7                                       Pulling image \"k8s.gcr.io/metadata-proxy:v0.1.12\"\nkube-system                          4m20s       Normal    Pulled                       pod/metadata-proxy-v0.1-9nsx7                                       Successfully pulled image \"k8s.gcr.io/metadata-proxy:v0.1.12\"\nkube-system                          4m18s       Normal    Created                      pod/metadata-proxy-v0.1-9nsx7                                       Created container metadata-proxy\nkube-system                          4m17s       Normal    Started                      pod/metadata-proxy-v0.1-9nsx7                                       Started container metadata-proxy\nkube-system                          4m17s       Normal    Pulling                      pod/metadata-proxy-v0.1-9nsx7                                       Pulling image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\"\nkube-system                          4m16s       Normal    Pulled                       pod/metadata-proxy-v0.1-9nsx7                                       Successfully pulled image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\"\nkube-system                          4m15s       Normal    Created                      pod/metadata-proxy-v0.1-9nsx7                          
             Created container prometheus-to-sd-exporter\nkube-system                          4m13s       Normal    Started                      pod/metadata-proxy-v0.1-9nsx7                                       Started container prometheus-to-sd-exporter\nkube-system                          4m29s       Normal    Scheduled                    pod/metadata-proxy-v0.1-chbgg                                       Successfully assigned kube-system/metadata-proxy-v0.1-chbgg to bootstrap-e2e-master\nkube-system                          4m27s       Normal    Pulling                      pod/metadata-proxy-v0.1-chbgg                                       Pulling image \"k8s.gcr.io/metadata-proxy:v0.1.12\"\nkube-system                          4m26s       Normal    Pulled                       pod/metadata-proxy-v0.1-chbgg                                       Successfully pulled image \"k8s.gcr.io/metadata-proxy:v0.1.12\"\nkube-system                          4m26s       Normal    Created                      pod/metadata-proxy-v0.1-chbgg                                       Created container metadata-proxy\nkube-system                          4m25s       Normal    Started                      pod/metadata-proxy-v0.1-chbgg                                       Started container metadata-proxy\nkube-system                          4m25s       Normal    Pulling                      pod/metadata-proxy-v0.1-chbgg                                       Pulling image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\"\nkube-system                          4m23s       Normal    Pulled                       pod/metadata-proxy-v0.1-chbgg                                       Successfully pulled image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\"\nkube-system                          4m22s       Normal    Created                      pod/metadata-proxy-v0.1-chbgg                                       Created container prometheus-to-sd-exporter\nkube-system                          4m21s       Normal  
  Started                      pod/metadata-proxy-v0.1-chbgg                                       Started container prometheus-to-sd-exporter\nkube-system                          4m25s       Normal    Scheduled                    pod/metadata-proxy-v0.1-nkdb2                                       Successfully assigned kube-system/metadata-proxy-v0.1-nkdb2 to bootstrap-e2e-minion-group-q10p\nkube-system                          4m24s       Warning   FailedMount                  pod/metadata-proxy-v0.1-nkdb2                                       MountVolume.SetUp failed for volume \"metadata-proxy-token-mplx6\" : failed to sync secret cache: timed out waiting for the condition\nkube-system                          4m22s       Normal    Pulling                      pod/metadata-proxy-v0.1-nkdb2                                       Pulling image \"k8s.gcr.io/metadata-proxy:v0.1.12\"\nkube-system                          4m20s       Normal    Pulled                       pod/metadata-proxy-v0.1-nkdb2                                       Successfully pulled image \"k8s.gcr.io/metadata-proxy:v0.1.12\"\nkube-system                          4m18s       Normal    Created                      pod/metadata-proxy-v0.1-nkdb2                                       Created container metadata-proxy\nkube-system                          4m17s       Normal    Started                      pod/metadata-proxy-v0.1-nkdb2                                       Started container metadata-proxy\nkube-system                          4m17s       Normal    Pulling                      pod/metadata-proxy-v0.1-nkdb2                                       Pulling image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\"\nkube-system                          4m16s       Normal    Pulled                       pod/metadata-proxy-v0.1-nkdb2                                       Successfully pulled image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\"\nkube-system                          4m14s       Normal    Created     
                 pod/metadata-proxy-v0.1-nkdb2                                       Created container prometheus-to-sd-exporter\nkube-system                          4m12s       Normal    Started                      pod/metadata-proxy-v0.1-nkdb2                                       Started container prometheus-to-sd-exporter\nkube-system                          4m24s       Normal    Scheduled                    pod/metadata-proxy-v0.1-zt754                                       Successfully assigned kube-system/metadata-proxy-v0.1-zt754 to bootstrap-e2e-minion-group-vrtv\nkube-system                          4m23s       Warning   FailedMount                  pod/metadata-proxy-v0.1-zt754                                       MountVolume.SetUp failed for volume \"metadata-proxy-token-mplx6\" : failed to sync secret cache: timed out waiting for the condition\nkube-system                          4m20s       Normal    Pulling                      pod/metadata-proxy-v0.1-zt754                                       Pulling image \"k8s.gcr.io/metadata-proxy:v0.1.12\"\nkube-system                          4m18s       Normal    Pulled                       pod/metadata-proxy-v0.1-zt754                                       Successfully pulled image \"k8s.gcr.io/metadata-proxy:v0.1.12\"\nkube-system                          4m17s       Normal    Created                      pod/metadata-proxy-v0.1-zt754                                       Created container metadata-proxy\nkube-system                          4m16s       Normal    Started                      pod/metadata-proxy-v0.1-zt754                                       Started container metadata-proxy\nkube-system                          4m16s       Normal    Pulling                      pod/metadata-proxy-v0.1-zt754                                       Pulling image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\"\nkube-system                          4m15s       Normal    Pulled                       
pod/metadata-proxy-v0.1-zt754                                       Successfully pulled image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\"\nkube-system                          4m14s       Normal    Created                      pod/metadata-proxy-v0.1-zt754                                       Created container prometheus-to-sd-exporter\nkube-system                          4m12s       Normal    Started                      pod/metadata-proxy-v0.1-zt754                                       Started container prometheus-to-sd-exporter\nkube-system                          4m30s       Normal    SuccessfulCreate             daemonset/metadata-proxy-v0.1                                       Created pod: metadata-proxy-v0.1-chbgg\nkube-system                          4m26s       Normal    SuccessfulCreate             daemonset/metadata-proxy-v0.1                                       Created pod: metadata-proxy-v0.1-666fv\nkube-system                          4m26s       Normal    SuccessfulCreate             daemonset/metadata-proxy-v0.1                                       Created pod: metadata-proxy-v0.1-nkdb2\nkube-system                          4m25s       Normal    SuccessfulCreate             daemonset/metadata-proxy-v0.1                                       Created pod: metadata-proxy-v0.1-9nsx7\nkube-system                          4m24s       Normal    SuccessfulCreate             daemonset/metadata-proxy-v0.1                                       Created pod: metadata-proxy-v0.1-zt754\nkube-system                          3m54s       Normal    Scheduled                    pod/metrics-server-v0.3.6-5f859c87d6-dtqxc                          Successfully assigned kube-system/metrics-server-v0.3.6-5f859c87d6-dtqxc to bootstrap-e2e-minion-group-qkcq\nkube-system                          3m53s       Normal    Pulling                      pod/metrics-server-v0.3.6-5f859c87d6-dtqxc                          Pulling image 
\"k8s.gcr.io/metrics-server-amd64:v0.3.6\"\nkube-system                          3m52s       Normal    Pulled                       pod/metrics-server-v0.3.6-5f859c87d6-dtqxc                          Successfully pulled image \"k8s.gcr.io/metrics-server-amd64:v0.3.6\"\nkube-system                          3m52s       Normal    Created                      pod/metrics-server-v0.3.6-5f859c87d6-dtqxc                          Created container metrics-server\nkube-system                          3m51s       Normal    Started                      pod/metrics-server-v0.3.6-5f859c87d6-dtqxc                          Started container metrics-server\nkube-system                          3m51s       Normal    Pulling                      pod/metrics-server-v0.3.6-5f859c87d6-dtqxc                          Pulling image \"k8s.gcr.io/addon-resizer:1.8.7\"\nkube-system                          3m50s       Normal    Pulled                       pod/metrics-server-v0.3.6-5f859c87d6-dtqxc                          Successfully pulled image \"k8s.gcr.io/addon-resizer:1.8.7\"\nkube-system                          3m50s       Normal    Created                      pod/metrics-server-v0.3.6-5f859c87d6-dtqxc                          Created container metrics-server-nanny\nkube-system                          3m49s       Normal    Started                      pod/metrics-server-v0.3.6-5f859c87d6-dtqxc                          Started container metrics-server-nanny\nkube-system                          3m54s       Normal    SuccessfulCreate             replicaset/metrics-server-v0.3.6-5f859c87d6                         Created pod: metrics-server-v0.3.6-5f859c87d6-dtqxc\nkube-system                          4m37s       Warning   FailedScheduling             pod/metrics-server-v0.3.6-65d4dc878-b8jf8                           no nodes available to schedule pods\nkube-system                          4m28s       Warning   FailedScheduling             pod/metrics-server-v0.3.6-65d4dc878-b8jf8   
                        0/1 nodes are available: 1 node(s) were unschedulable.\nkube-system                          4m26s       Warning   FailedScheduling             pod/metrics-server-v0.3.6-65d4dc878-b8jf8                           0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) had taints that the pod didn't tolerate.\nkube-system                          4m12s       Warning   FailedScheduling             pod/metrics-server-v0.3.6-65d4dc878-b8jf8                           0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) had taints that the pod didn't tolerate.\nkube-system                          4m1s        Normal    Scheduled                    pod/metrics-server-v0.3.6-65d4dc878-b8jf8                           Successfully assigned kube-system/metrics-server-v0.3.6-65d4dc878-b8jf8 to bootstrap-e2e-minion-group-vrtv\nkube-system                          4m          Normal    Pulling                      pod/metrics-server-v0.3.6-65d4dc878-b8jf8                           Pulling image \"k8s.gcr.io/metrics-server-amd64:v0.3.6\"\nkube-system                          3m59s       Normal    Pulled                       pod/metrics-server-v0.3.6-65d4dc878-b8jf8                           Successfully pulled image \"k8s.gcr.io/metrics-server-amd64:v0.3.6\"\nkube-system                          3m58s       Normal    Created                      pod/metrics-server-v0.3.6-65d4dc878-b8jf8                           Created container metrics-server\nkube-system                          3m58s       Normal    Started                      pod/metrics-server-v0.3.6-65d4dc878-b8jf8                           Started container metrics-server\nkube-system                          3m58s       Normal    Pulling                      pod/metrics-server-v0.3.6-65d4dc878-b8jf8                           Pulling image \"k8s.gcr.io/addon-resizer:1.8.7\"\nkube-system                          3m55s       Normal    Pulled                       
pod/metrics-server-v0.3.6-65d4dc878-b8jf8                           Successfully pulled image \"k8s.gcr.io/addon-resizer:1.8.7\"\nkube-system                          3m55s       Normal    Created                      pod/metrics-server-v0.3.6-65d4dc878-b8jf8                           Created container metrics-server-nanny\nkube-system                          3m54s       Normal    Started                      pod/metrics-server-v0.3.6-65d4dc878-b8jf8                           Started container metrics-server-nanny\nkube-system                          3m49s       Normal    Killing                      pod/metrics-server-v0.3.6-65d4dc878-b8jf8                           Stopping container metrics-server\nkube-system                          3m49s       Normal    Killing                      pod/metrics-server-v0.3.6-65d4dc878-b8jf8                           Stopping container metrics-server-nanny\nkube-system                          4m37s       Warning   FailedCreate                 replicaset/metrics-server-v0.3.6-65d4dc878                          Error creating: pods \"metrics-server-v0.3.6-65d4dc878-\" is forbidden: unable to validate against any pod security policy: []\nkube-system                          4m37s       Normal    SuccessfulCreate             replicaset/metrics-server-v0.3.6-65d4dc878                          Created pod: metrics-server-v0.3.6-65d4dc878-b8jf8\nkube-system                          3m49s       Normal    SuccessfulDelete             replicaset/metrics-server-v0.3.6-65d4dc878                          Deleted pod: metrics-server-v0.3.6-65d4dc878-b8jf8\nkube-system                          4m37s       Normal    ScalingReplicaSet            deployment/metrics-server-v0.3.6                                    Scaled up replica set metrics-server-v0.3.6-65d4dc878 to 1\nkube-system                          3m54s       Normal    ScalingReplicaSet            deployment/metrics-server-v0.3.6                                    Scaled up replica 
set metrics-server-v0.3.6-5f859c87d6 to 1\nkube-system                          3m49s       Normal    ScalingReplicaSet            deployment/metrics-server-v0.3.6                                    Scaled down replica set metrics-server-v0.3.6-65d4dc878 to 0\nkube-system                          4m34s       Warning   FailedScheduling             pod/volume-snapshot-controller-0                                    no nodes available to schedule pods\nkube-system                          4m27s       Warning   FailedScheduling             pod/volume-snapshot-controller-0                                    0/1 nodes are available: 1 node(s) were unschedulable.\nkube-system                          4m16s       Warning   FailedScheduling             pod/volume-snapshot-controller-0                                    0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) had taints that the pod didn't tolerate.\nkube-system                          4m8s        Normal    Scheduled                    pod/volume-snapshot-controller-0                                    Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-qn53\nkube-system                          4m7s        Normal    Pulling                      pod/volume-snapshot-controller-0                                    Pulling image \"quay.io/k8scsi/snapshot-controller:v2.0.0-rc2\"\nkube-system                          4m4s        Normal    Pulled                       pod/volume-snapshot-controller-0                                    Successfully pulled image \"quay.io/k8scsi/snapshot-controller:v2.0.0-rc2\"\nkube-system                          4m4s        Normal    Created                      pod/volume-snapshot-controller-0                                    Created container volume-snapshot-controller\nkube-system                          4m3s        Normal    Started                      pod/volume-snapshot-controller-0                                    
Started container volume-snapshot-controller\nkube-system                          4m34s       Normal    SuccessfulCreate             statefulset/volume-snapshot-controller                              create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful\nkubectl-7630                         <unknown>                                                                                                              some data here\nkubectl-7630                         1s          Warning   ProvisioningFailed           persistentvolumeclaim/pvc1g5bncrtz7t                                Failed to provision volume with StorageClass \"standard\": claim.Spec.Selector is not supported for dynamic provisioning on GCE\nnettest-2543                         74s         Normal    Scheduled                    pod/netserver-0                                                     Successfully assigned nettest-2543/netserver-0 to bootstrap-e2e-minion-group-q10p\nnettest-2543                         72s         Normal    Pulling                      pod/netserver-0                                                     Pulling image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\"\nnettest-2543                         54s         Normal    Pulled                       pod/netserver-0                                                     Successfully pulled image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\"\nnettest-2543                         53s         Normal    Created                      pod/netserver-0                                                     Created container webserver\nnettest-2543                         52s         Normal    Started                      pod/netserver-0                                                     Started container webserver\nnettest-2543                         74s         Normal    Scheduled                    pod/netserver-1                                                     Successfully assigned 
nettest-2543/netserver-1 to bootstrap-e2e-minion-group-qkcq\nnettest-2543                         71s         Normal    Pulling                      pod/netserver-1                                                     Pulling image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\"\nnettest-2543                         65s         Normal    Pulled                       pod/netserver-1                                                     Successfully pulled image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\"\nnettest-2543                         65s         Normal    Created                      pod/netserver-1                                                     Created container webserver\nnettest-2543                         64s         Normal    Started                      pod/netserver-1                                                     Started container webserver\nnettest-2543                         74s         Normal    Scheduled                    pod/netserver-2                                                     Successfully assigned nettest-2543/netserver-2 to bootstrap-e2e-minion-group-qn53\nnettest-2543                         72s         Normal    Pulling                      pod/netserver-2                                                     Pulling image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\"\nnettest-2543                         58s         Normal    Pulled                       pod/netserver-2                                                     Successfully pulled image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\"\nnettest-2543                         57s         Normal    Created                      pod/netserver-2                                                     Created container webserver\nnettest-2543                         56s         Normal    Started                      pod/netserver-2                                                     Started container webserver\nnettest-2543                         73s         
Normal    Scheduled                    pod/netserver-3                                                     Successfully assigned nettest-2543/netserver-3 to bootstrap-e2e-minion-group-vrtv\nnettest-2543                         71s         Normal    Pulling                      pod/netserver-3                                                     Pulling image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\"\nnettest-2543                         63s         Normal    Pulled                       pod/netserver-3                                                     Successfully pulled image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\"\nnettest-2543                         63s         Normal    Created                      pod/netserver-3                                                     Created container webserver\nnettest-2543                         62s         Normal    Started                      pod/netserver-3                                                     Started container webserver\nnettest-2543                         37s         Normal    Scheduled                    pod/test-container-pod                                              Successfully assigned nettest-2543/test-container-pod to bootstrap-e2e-minion-group-qn53\nnettest-2543                         34s         Normal    Pulled                       pod/test-container-pod                                              Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nnettest-2543                         34s         Normal    Created                      pod/test-container-pod                                              Created container webserver\nnettest-2543                         33s         Normal    Started                      pod/test-container-pod                                              Started container webserver\npersistent-local-volumes-test-158    11s         Normal    Pulled                       
pod/hostexec-bootstrap-e2e-minion-group-q10p-zs7w8                  Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\npersistent-local-volumes-test-158    10s         Normal    Created                      pod/hostexec-bootstrap-e2e-minion-group-q10p-zs7w8                  Created container agnhost\npersistent-local-volumes-test-158    8s          Normal    Started                      pod/hostexec-bootstrap-e2e-minion-group-q10p-zs7w8                  Started container agnhost\npersistent-local-volumes-test-158    3s          Warning   ProvisioningFailed           persistentvolumeclaim/pvc-d6q5d                                     no volume plugin matched\npersistent-local-volumes-test-4682   5s          Normal    Pulled                       pod/hostexec-bootstrap-e2e-minion-group-q10p-j2kgb                  Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\npersistent-local-volumes-test-4682   5s          Normal    Created                      pod/hostexec-bootstrap-e2e-minion-group-q10p-j2kgb                  Created container agnhost\npersistent-local-volumes-test-4682   3s          Normal    Started                      pod/hostexec-bootstrap-e2e-minion-group-q10p-j2kgb                  Started container agnhost\npersistent-local-volumes-test-4687   31s         Normal    Pulled                       pod/security-context-07e72838-a60b-40c5-bd1b-48b952dec4df           Container image \"docker.io/library/busybox:1.29\" already present on machine\npersistent-local-volumes-test-4687   31s         Normal    Created                      pod/security-context-07e72838-a60b-40c5-bd1b-48b952dec4df           Created container write-pod\npersistent-local-volumes-test-4687   30s         Normal    Started                      pod/security-context-07e72838-a60b-40c5-bd1b-48b952dec4df           Started container write-pod\npersistent-local-volumes-test-4687   17s         Normal    Killing   
pod/security-context-07e72838-a60b-40c5-bd1b-48b952dec4df   Stopping container write-pod
persistent-local-volumes-test-8451   67s   Normal    Pulling                  pod/hostexec-bootstrap-e2e-minion-group-q10p-prdbx   Pulling image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8"
persistent-local-volumes-test-8451   54s   Normal    Pulled                   pod/hostexec-bootstrap-e2e-minion-group-q10p-prdbx   Successfully pulled image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8"
persistent-local-volumes-test-8451   52s   Normal    Created                  pod/hostexec-bootstrap-e2e-minion-group-q10p-prdbx   Created container agnhost
persistent-local-volumes-test-8451   50s   Normal    Started                  pod/hostexec-bootstrap-e2e-minion-group-q10p-prdbx   Started container agnhost
persistent-local-volumes-test-8451   19s   Normal    Scheduled                pod/security-context-81c57741-b951-488f-985a-204e150ae56e   Successfully assigned persistent-local-volumes-test-8451/security-context-81c57741-b951-488f-985a-204e150ae56e to bootstrap-e2e-minion-group-q10p
persistent-local-volumes-test-8451   15s   Normal    Pulled                   pod/security-context-81c57741-b951-488f-985a-204e150ae56e   Container image "docker.io/library/busybox:1.29" already present on machine
persistent-local-volumes-test-8451   15s   Normal    Created                  pod/security-context-81c57741-b951-488f-985a-204e150ae56e   Created container write-pod
persistent-local-volumes-test-8451   13s   Normal    Started                  pod/security-context-81c57741-b951-488f-985a-204e150ae56e   Started container write-pod
persistent-local-volumes-test-8451   1s    Normal    Killing                  pod/security-context-81c57741-b951-488f-985a-204e150ae56e   Stopping container write-pod
persistent-local-volumes-test-8451   37s   Normal    Scheduled                pod/security-context-f4c73856-24ab-45a9-a0d4-2ea953bd87c2   Successfully assigned persistent-local-volumes-test-8451/security-context-f4c73856-24ab-45a9-a0d4-2ea953bd87c2 to bootstrap-e2e-minion-group-q10p
persistent-local-volumes-test-8451   31s   Normal    Pulled                   pod/security-context-f4c73856-24ab-45a9-a0d4-2ea953bd87c2   Container image "docker.io/library/busybox:1.29" already present on machine
persistent-local-volumes-test-8451   31s   Normal    Created                  pod/security-context-f4c73856-24ab-45a9-a0d4-2ea953bd87c2   Created container write-pod
persistent-local-volumes-test-8451   29s   Normal    Started                  pod/security-context-f4c73856-24ab-45a9-a0d4-2ea953bd87c2   Started container write-pod
persistent-local-volumes-test-8451   1s    Normal    Killing                  pod/security-context-f4c73856-24ab-45a9-a0d4-2ea953bd87c2   Stopping container write-pod
projected-1075                       5s    Normal    Scheduled                pod/pod-projected-configmaps-e463f003-3151-4450-b727-0150d57cff81   Successfully assigned projected-1075/pod-projected-configmaps-e463f003-3151-4450-b727-0150d57cff81 to bootstrap-e2e-minion-group-qn53
projected-1075                       4s    Normal    Pulled                   pod/pod-projected-configmaps-e463f003-3151-4450-b727-0150d57cff81   Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
projected-1075                       4s    Normal    Created                  pod/pod-projected-configmaps-e463f003-3151-4450-b727-0150d57cff81   Created container projected-configmap-volume-test
projected-1075                       4s    Normal    Started                  pod/pod-projected-configmaps-e463f003-3151-4450-b727-0150d57cff81   Started container projected-configmap-volume-test
projected-5454                       2s    Normal    Scheduled                pod/labelsupdatec42af265-6ecc-4902-9990-c4de108151c2   Successfully assigned projected-5454/labelsupdatec42af265-6ecc-4902-9990-c4de108151c2 to bootstrap-e2e-minion-group-vrtv
provisioning-2262                    33s   Normal    Pulled                   pod/hostexec-bootstrap-e2e-minion-group-vrtv-mc4r8   Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
provisioning-2262                    33s   Normal    Created                  pod/hostexec-bootstrap-e2e-minion-group-vrtv-mc4r8   Created container agnhost
provisioning-2262                    32s   Normal    Started                  pod/hostexec-bootstrap-e2e-minion-group-vrtv-mc4r8   Started container agnhost
provisioning-2262                    7s    Normal    Pulled                   pod/pod-subpath-test-preprovisionedpv-4s9x   Container image "docker.io/library/busybox:1.29" already present on machine
provisioning-2262                    7s    Normal    Created                  pod/pod-subpath-test-preprovisionedpv-4s9x   Created container init-volume-preprovisionedpv-4s9x
provisioning-2262                    6s    Normal    Started                  pod/pod-subpath-test-preprovisionedpv-4s9x   Started container init-volume-preprovisionedpv-4s9x
provisioning-2262                    5s    Normal    Pulled                   pod/pod-subpath-test-preprovisionedpv-4s9x   Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-2262                    5s    Normal    Created                  pod/pod-subpath-test-preprovisionedpv-4s9x   Created container test-container-subpath-preprovisionedpv-4s9x
provisioning-2262                    4s    Normal    Started                  pod/pod-subpath-test-preprovisionedpv-4s9x   Started container test-container-subpath-preprovisionedpv-4s9x
provisioning-2262                    23s   Warning   ProvisioningFailed       persistentvolumeclaim/pvc-7lsql   storageclass.storage.k8s.io "provisioning-2262" not found
provisioning-2650                    29s   Normal    LeaderElection           endpoints/example.com-nfs-provisioning-2650   external-provisioner-x4tn6_7940beaa-2e1e-4341-8032-afd54be9edc8 became leader
provisioning-2650                    73s   Normal    Scheduled                pod/external-provisioner-x4tn6   Successfully assigned provisioning-2650/external-provisioner-x4tn6 to bootstrap-e2e-minion-group-qn53
provisioning-2650                    71s   Normal    Pulling                  pod/external-provisioner-x4tn6   Pulling image "quay.io/kubernetes_incubator/nfs-provisioner:v2.2.2"
provisioning-2650                    37s   Normal    Pulled                   pod/external-provisioner-x4tn6   Successfully pulled image "quay.io/kubernetes_incubator/nfs-provisioner:v2.2.2"
provisioning-2650                    37s   Normal    Created                  pod/external-provisioner-x4tn6   Created container nfs-provisioner
provisioning-2650                    36s   Normal    Started                  pod/external-provisioner-x4tn6   Started container nfs-provisioner
provisioning-2650                    11s   Normal    Killing                  pod/external-provisioner-x4tn6   Stopping container nfs-provisioner
provisioning-2650                    29s   Normal    ExternalProvisioning     persistentvolumeclaim/nfsmc7pz   waiting for a volume to be created, either by external provisioner "example.com/nfs-provisioning-2650" or manually created by system administrator
provisioning-2650                    28s   Normal    Provisioning             persistentvolumeclaim/nfsmc7pz   External provisioner is provisioning volume for claim "provisioning-2650/nfsmc7pz"
provisioning-2650                    28s   Normal    ProvisioningSucceeded    persistentvolumeclaim/nfsmc7pz   Successfully provisioned volume pvc-a8a9c037-e3c5-4080-b638-ecf76e7cf099
provisioning-2650                    26s   Normal    Scheduled                pod/pod-subpath-test-dynamicpv-9gtz   Successfully assigned provisioning-2650/pod-subpath-test-dynamicpv-9gtz to bootstrap-e2e-minion-group-qn53
provisioning-2650                    22s   Normal    Pulled                   pod/pod-subpath-test-dynamicpv-9gtz   Container image "docker.io/library/busybox:1.29" already present on machine
provisioning-2650                    22s   Normal    Created                  pod/pod-subpath-test-dynamicpv-9gtz   Created container init-volume-dynamicpv-9gtz
provisioning-2650                    22s   Normal    Started                  pod/pod-subpath-test-dynamicpv-9gtz   Started container init-volume-dynamicpv-9gtz
provisioning-2650                    22s   Normal    Pulled                   pod/pod-subpath-test-dynamicpv-9gtz   Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-2650                    22s   Normal    Created                  pod/pod-subpath-test-dynamicpv-9gtz   Created container test-init-volume-dynamicpv-9gtz
provisioning-2650                    21s   Normal    Started                  pod/pod-subpath-test-dynamicpv-9gtz   Started container test-init-volume-dynamicpv-9gtz
provisioning-2650                    21s   Normal    Pulled                   pod/pod-subpath-test-dynamicpv-9gtz   Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-2650                    20s   Normal    Created                  pod/pod-subpath-test-dynamicpv-9gtz   Created container test-container-subpath-dynamicpv-9gtz
provisioning-2650                    20s   Normal    Started                  pod/pod-subpath-test-dynamicpv-9gtz   Started container test-container-subpath-dynamicpv-9gtz
provisioning-4978                    68s   Normal    Pulling                  pod/hostexec-bootstrap-e2e-minion-group-q10p-h5xdl   Pulling image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8"
provisioning-4978                    54s   Normal    Pulled                   pod/hostexec-bootstrap-e2e-minion-group-q10p-h5xdl   Successfully pulled image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8"
provisioning-4978                    52s   Normal    Created                  pod/hostexec-bootstrap-e2e-minion-group-q10p-h5xdl   Created container agnhost
provisioning-4978                    51s   Normal    Started                  pod/hostexec-bootstrap-e2e-minion-group-q10p-h5xdl   Started container agnhost
provisioning-4978                    18s   Normal    Pulled                   pod/pod-subpath-test-preprovisionedpv-tlms   Container image "docker.io/library/busybox:1.29" already present on machine
provisioning-4978                    17s   Normal    Created                  pod/pod-subpath-test-preprovisionedpv-tlms   Created container init-volume-preprovisionedpv-tlms
provisioning-4978                    15s   Normal    Started                  pod/pod-subpath-test-preprovisionedpv-tlms   Started container init-volume-preprovisionedpv-tlms
provisioning-4978                    13s   Normal    Pulled                   pod/pod-subpath-test-preprovisionedpv-tlms   Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-4978                    13s   Normal    Created                  pod/pod-subpath-test-preprovisionedpv-tlms   Created container test-init-subpath-preprovisionedpv-tlms
provisioning-4978                    10s   Normal    Started                  pod/pod-subpath-test-preprovisionedpv-tlms   Started container test-init-subpath-preprovisionedpv-tlms
provisioning-4978                    7s    Normal    Pulled                   pod/pod-subpath-test-preprovisionedpv-tlms   Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-4978                    7s    Normal    Created                  pod/pod-subpath-test-preprovisionedpv-tlms   Created container test-container-subpath-preprovisionedpv-tlms
provisioning-4978                    5s    Normal    Started                  pod/pod-subpath-test-preprovisionedpv-tlms   Started container test-container-subpath-preprovisionedpv-tlms
provisioning-4978                    34s   Warning   ProvisioningFailed       persistentvolumeclaim/pvc-fs6ml   storageclass.storage.k8s.io "provisioning-4978" not found
provisioning-4982                    21s   Normal    Pulled                   pod/pod-subpath-test-inlinevolume-bbg9   Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-4982                    21s   Normal    Created                  pod/pod-subpath-test-inlinevolume-bbg9   Created container test-container-subpath-inlinevolume-bbg9
provisioning-4982                    21s   Normal    Started                  pod/pod-subpath-test-inlinevolume-bbg9   Started container test-container-subpath-inlinevolume-bbg9
provisioning-4982                    21s   Normal    Pulled                   pod/pod-subpath-test-inlinevolume-bbg9   Container image "docker.io/library/busybox:1.29" already present on machine
provisioning-4982                    21s   Normal    Created                  pod/pod-subpath-test-inlinevolume-bbg9   Created container test-container-volume-inlinevolume-bbg9
provisioning-4982                    20s   Normal    Started                  pod/pod-subpath-test-inlinevolume-bbg9   Started container test-container-volume-inlinevolume-bbg9
provisioning-4982                    15s   Normal    Killing                  pod/pod-subpath-test-inlinevolume-bbg9   Stopping container test-container-volume-inlinevolume-bbg9
provisioning-7841                    22s   Normal    Scheduled                pod/pod-subpath-test-inlinevolume-4csd   Successfully assigned provisioning-7841/pod-subpath-test-inlinevolume-4csd to bootstrap-e2e-minion-group-vrtv
provisioning-7841                    20s   Normal    Pulled                   pod/pod-subpath-test-inlinevolume-4csd   Container image "docker.io/library/busybox:1.29" already present on machine
provisioning-7841                    19s   Normal    Created                  pod/pod-subpath-test-inlinevolume-4csd   Created container init-volume-inlinevolume-4csd
provisioning-7841                    19s   Normal    Started                  pod/pod-subpath-test-inlinevolume-4csd   Started container init-volume-inlinevolume-4csd
provisioning-7841                    17s   Normal    Pulled                   pod/pod-subpath-test-inlinevolume-4csd   Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-7841                    17s   Normal    Created                  pod/pod-subpath-test-inlinevolume-4csd   Created container test-container-subpath-inlinevolume-4csd
provisioning-7841                    16s   Normal    Started                  pod/pod-subpath-test-inlinevolume-4csd   Started container test-container-subpath-inlinevolume-4csd
provisioning-8413                    15s   Normal    Pulled                   pod/hostexec-bootstrap-e2e-minion-group-qkcq-ch4n5   Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
provisioning-8413                    15s   Normal    Created                  pod/hostexec-bootstrap-e2e-minion-group-qkcq-ch4n5   Created container agnhost
provisioning-8413                    15s   Normal    Started                  pod/hostexec-bootstrap-e2e-minion-group-qkcq-ch4n5   Started container agnhost
provisioning-8413                    13s   Warning   ProvisioningFailed       persistentvolumeclaim/pvc-7z74k   storageclass.storage.k8s.io "provisioning-8413" not found
provisioning-8742                    3s    Normal    Scheduled                pod/gluster-server   Successfully assigned provisioning-8742/gluster-server to bootstrap-e2e-minion-group-vrtv
provisioning-990                     33s   Normal    WaitForFirstConsumer     persistentvolumeclaim/pvc-h6d94   waiting for first consumer to be created before binding
provisioning-990                     30s   Normal    ProvisioningSucceeded    persistentvolumeclaim/pvc-h6d94   Successfully provisioned volume pvc-20a925ae-ee6d-444e-92c2-27a20a1a8194 using kubernetes.io/gce-pd
provisioning-990                     28s   Normal    Scheduled                pod/pvc-volume-tester-writer-h7pcl   Successfully assigned provisioning-990/pvc-volume-tester-writer-h7pcl to bootstrap-e2e-minion-group-vrtv
provisioning-990                     28s   Warning   FailedMount              pod/pvc-volume-tester-writer-h7pcl   Unable to attach or mount volumes: unmounted volumes=[my-volume default-token-ql44h], unattached volumes=[my-volume default-token-ql44h]: error processing PVC provisioning-990/pvc-h6d94: failed to fetch PVC from API server: persistentvolumeclaims "pvc-h6d94" is forbidden: User "system:node:bootstrap-e2e-minion-group-vrtv" cannot get resource "persistentvolumeclaims" in API group "" in the namespace "provisioning-990": no relationship found between node "bootstrap-e2e-minion-group-vrtv" and this object
provisioning-990                     27s   Warning   FailedMount              pod/pvc-volume-tester-writer-h7pcl   MountVolume.SetUp failed for volume "default-token-ql44h" : failed to sync secret cache: timed out waiting for the condition
provisioning-990                     21s   Normal    SuccessfulAttachVolume   pod/pvc-volume-tester-writer-h7pcl   AttachVolume.Attach succeeded for volume "pvc-20a925ae-ee6d-444e-92c2-27a20a1a8194"
provisioning-990                     15s   Normal    Pulled                   pod/pvc-volume-tester-writer-h7pcl   Container image "docker.io/library/busybox:1.29" already present on machine
provisioning-990                     14s   Normal    Created                  pod/pvc-volume-tester-writer-h7pcl   Created container volume-tester
provisioning-990                     13s   Normal    Started                  pod/pvc-volume-tester-writer-h7pcl   Started container volume-tester
pv-4462                              10s   Normal    Scheduled                pod/nfs-server   Successfully assigned pv-4462/nfs-server to bootstrap-e2e-minion-group-vrtv
pv-4462                              7s    Normal    Pulling                  pod/nfs-server   Pulling image "gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0"
services-971                         19s   Normal    Scheduled                pod/execpod-c5gsk   Successfully assigned services-971/execpod-c5gsk to bootstrap-e2e-minion-group-vrtv
services-971                         17s   Normal    Pulled                   pod/execpod-c5gsk   Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
services-971                         17s   Normal    Created                  pod/execpod-c5gsk   Created container agnhost-pause
services-971                         16s   Normal    Started                  pod/execpod-c5gsk   Started container agnhost-pause
services-971                         7s    Normal    Killing                  pod/execpod-c5gsk   Stopping container agnhost-pause
services-971                         35s   Normal    Scheduled                pod/execpod-rsp4x   Successfully assigned services-971/execpod-rsp4x to bootstrap-e2e-minion-group-qkcq
services-971                         34s   Normal    Pulled                   pod/execpod-rsp4x   Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
services-971                         34s   Normal    Created                  pod/execpod-rsp4x   Created container agnhost-pause
services-971                         34s   Normal    Started                  pod/execpod-rsp4x   Started container agnhost-pause
services-971                         25s   Normal    Killing                  pod/execpod-rsp4x   Stopping container agnhost-pause
services-971                         73s   Normal    Scheduled                pod/service-proxy-disabled-flvrl   Successfully assigned services-971/service-proxy-disabled-flvrl to bootstrap-e2e-minion-group-q10p
services-971                         66s   Normal    Pulling                  pod/service-proxy-disabled-flvrl   Pulling image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8"
services-971                         54s   Normal    Pulled                   pod/service-proxy-disabled-flvrl   Successfully pulled image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8"
services-971                         52s   Normal    Created                  pod/service-proxy-disabled-flvrl   Created container service-proxy-disabled
services-971                         50s   Normal    Started                  pod/service-proxy-disabled-flvrl   Started container service-proxy-disabled
services-971                         73s   Normal    Scheduled                pod/service-proxy-disabled-mblhb   Successfully assigned services-971/service-proxy-disabled-mblhb to bootstrap-e2e-minion-group-qn53
services-971                         72s   Warning   FailedMount              pod/service-proxy-disabled-mblhb   MountVolume.SetUp failed for volume "default-token-h6klg" : failed to sync secret cache: timed out waiting for the condition
services-971                         69s   Normal    Pulling                  pod/service-proxy-disabled-mblhb   Pulling image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8"
services-971                         58s   Normal    Pulled                   pod/service-proxy-disabled-mblhb   Successfully pulled image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8"
services-971                         58s   Normal    Created                  pod/service-proxy-disabled-mblhb   Created container service-proxy-disabled
services-971                         57s   Normal    Started                  pod/service-proxy-disabled-mblhb   Started container service-proxy-disabled
services-971                         73s   Normal    Scheduled                pod/service-proxy-disabled-s97tt   Successfully assigned services-971/service-proxy-disabled-s97tt to bootstrap-e2e-minion-group-vrtv
services-971                         71s   Normal    Pulling                  pod/service-proxy-disabled-s97tt   Pulling image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8"
services-971                         63s   Normal    Pulled                   pod/service-proxy-disabled-s97tt   Successfully pulled image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8"
services-971                         63s   Normal    Created                  pod/service-proxy-disabled-s97tt   Created container service-proxy-disabled
services-971                         62s   Normal    Started                  pod/service-proxy-disabled-s97tt   Started container service-proxy-disabled
services-971                         73s   Normal    SuccessfulCreate         replicationcontroller/service-proxy-disabled   Created pod: service-proxy-disabled-mblhb
services-971                         73s   Normal    SuccessfulCreate         replicationcontroller/service-proxy-disabled   Created pod: service-proxy-disabled-flvrl
services-971                         73s   Normal    SuccessfulCreate         replicationcontroller/service-proxy-disabled   Created pod: service-proxy-disabled-s97tt
services-971                         45s   Normal    Scheduled                pod/service-proxy-toggled-gvk2f   Successfully assigned services-971/service-proxy-toggled-gvk2f to bootstrap-e2e-minion-group-vrtv
services-971                         44s   Normal    Pulled                   pod/service-proxy-toggled-gvk2f   Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
services-971                         43s   Normal    Created                  pod/service-proxy-toggled-gvk2f   Created container service-proxy-toggled
services-971                         42s   Normal    Started                  pod/service-proxy-toggled-gvk2f   Started container service-proxy-toggled
services-971                         45s   Normal    Scheduled                pod/service-proxy-toggled-rhvz4   Successfully assigned services-971/service-proxy-toggled-rhvz4 to bootstrap-e2e-minion-group-qkcq
services-971                         44s   Normal    Pulled                   pod/service-proxy-toggled-rhvz4   Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
services-971                         44s   Normal    Created                  pod/service-proxy-toggled-rhvz4   Created container service-proxy-toggled
services-971                         43s   Normal    Started                  pod/service-proxy-toggled-rhvz4   Started container service-proxy-toggled
services-971                         45s   Normal    Scheduled                pod/service-proxy-toggled-zh586   Successfully assigned services-971/service-proxy-toggled-zh586 to bootstrap-e2e-minion-group-qn53
services-971                         41s   Normal    Pulled                   pod/service-proxy-toggled-zh586   Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
services-971                         41s   Normal    Created                  pod/service-proxy-toggled-zh586   Created container service-proxy-toggled
services-971                         40s   Normal    Started                  pod/service-proxy-toggled-zh586   Started container service-proxy-toggled
services-971                         45s   Normal    SuccessfulCreate         replicationcontroller/service-proxy-toggled   Created pod: service-proxy-toggled-gvk2f
services-971                         45s   Normal    SuccessfulCreate         replicationcontroller/service-proxy-toggled   Created pod: service-proxy-toggled-zh586
services-971                         45s   Normal    SuccessfulCreate         replicationcontroller/service-proxy-toggled   Created pod: service-proxy-toggled-rhvz4
statefulset-1934                     73s   Normal    ProvisioningSucceeded    persistentvolumeclaim/datadir-ss-0   Successfully provisioned volume pvc-dac088e4-3119-4f37-81bf-772a249f0806 using kubernetes.io/gce-pd
statefulset-1934                     75s   Warning   FailedScheduling         pod/ss-0   running "VolumeBinding" filter plugin for pod "ss-0": pod has unbound immediate PersistentVolumeClaims
statefulset-1934                     72s   Normal    Scheduled                pod/ss-0   Successfully assigned statefulset-1934/ss-0 to bootstrap-e2e-minion-group-vrtv
statefulset-1934                     64s   Normal    SuccessfulAttachVolume   pod/ss-0   AttachVolume.Attach succeeded for volume "pvc-dac088e4-3119-4f37-81bf-772a249f0806"
statefulset-1934                     51s   Normal    Pulled                   pod/ss-0   Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
statefulset-1934                     51s   Normal    Created                  pod/ss-0   Created container webserver
statefulset-1934                     51s   Normal    Started                  pod/ss-0   Started container webserver
statefulset-1934                     46s   Warning   Unhealthy                pod/ss-0   Readiness probe failed:
statefulset-1934                     35s   Normal    Killing                  pod/ss-0   Stopping container webserver
statefulset-1934                     33s   Warning   Unhealthy                pod/ss-0   Readiness probe failed: cannot exec in a stopped state: unknown
statefulset-1934                     77s   Normal    SuccessfulCreate         statefulset/ss   create Claim datadir-ss-0 Pod ss-0 in StatefulSet ss success
statefulset-1934                     76s   Normal    SuccessfulCreate         statefulset/ss   create Pod ss-0 in StatefulSet ss successful
statefulset-1934                     39s   Warning   FailedCreate             statefulset/ss   create Pod ss-0 in StatefulSet ss failed error: The POST operation against Pod could not be completed at this time, please try again.
statefulset-1934                     35s   Normal    SuccessfulDelete         statefulset/ss   delete Pod ss-0 in StatefulSet ss successful
sysctl-2620                          1s    Normal    Scheduled                pod/sysctl-d11cfdbf-401e-40ad-94a3-17fbb042a280   Successfully assigned sysctl-2620/sysctl-d11cfdbf-401e-40ad-94a3-17fbb042a280 to bootstrap-e2e-minion-group-vrtv
volume-2246                          39s   Normal    LeaderElection           endpoints/example.com-nfs-volume-2246   external-provisioner-7z8jj_f8998740-ead7-4853-b237-93f06713ea12 became leader
volume-2246                          36s   Normal    Scheduled                pod/exec-volume-test-dynamicpv-48rs   Successfully assigned volume-2246/exec-volume-test-dynamicpv-48rs to bootstrap-e2e-minion-group-qn53
volume-2246                          32s   Normal    Pulling                  pod/exec-volume-test-dynamicpv-48rs   Pulling image "docker.io/library/nginx:1.14-alpine"
volume-2246                          28s   Normal    Pulled                   pod/exec-volume-test-dynamicpv-48rs   Successfully pulled image "docker.io/library/nginx:1.14-alpine"
volume-2246                          28s   Normal    Created                  pod/exec-volume-test-dynamicpv-48rs   Created container exec-container-dynamicpv-48rs
volume-2246                          28s   Normal    Started                  pod/exec-volume-test-dynamicpv-48rs   Started container exec-container-dynamicpv-48rs
volume-2246                          75s   Normal    Scheduled                pod/external-provisioner-7z8jj   Successfully assigned volume-2246/external-provisioner-7z8jj to bootstrap-e2e-minion-group-qkcq
volume-2246                          73s   Normal    Pulling                  pod/external-provisioner-7z8jj   Pulling image "quay.io/kubernetes_incubator/nfs-provisioner:v2.2.2"
volume-2246                          46s   Normal    Pulled                   pod/external-provisioner-7z8jj   Successfully pulled image "quay.io/kubernetes_incubator/nfs-provisioner:v2.2.2"
volume-2246                          45s   Normal    Created                  pod/external-provisioner-7z8jj   Created container nfs-provisioner
volume-2246                          45s   Normal    Started                  pod/external-provisioner-7z8jj   Started container nfs-provisioner
volume-2246                          20s   Normal    Killing                  pod/external-provisioner-7z8jj   Stopping container nfs-provisioner
volume-2246                          38s   Normal    ExternalProvisioning     persistentvolumeclaim/nfsw5mvl   waiting for a volume to be created, either by external provisioner "example.com/nfs-volume-2246" or manually created by system administrator
volume-2246                          38s   Normal    Provisioning             persistentvolumeclaim/nfsw5mvl   External provisioner is provisioning volume for claim "volume-2246/nfsw5mvl"
volume-2246                          38s   Normal    ProvisioningSucceeded    persistentvolumeclaim/nfsw5mvl   Successfully provisioned volume pvc-e9beffc0-96a0-4b59-968f-5dd85e3192db
volume-3891                          28s   Normal    LeaderElection           endpoints/example.com-nfs-volume-3891   external-provisioner-p5xdr_80ab2dc6-3766-41ed-af82-bea706e12bd0 became leader
volume-3891                          39s   Normal    Scheduled                pod/external-provisioner-p5xdr   Successfully assigned volume-3891/external-provisioner-p5xdr to bootstrap-e2e-minion-group-qn53
volume-3891                          35s   Normal    Pulled                   pod/external-provisioner-p5xdr   Container image "quay.io/kubernetes_incubator/nfs-provisioner:v2.2.2" already present on machine
volume-3891                          35s   Normal    Created                  pod/external-provisioner-p5xdr   Created container nfs-provisioner
volume-3891                          34s   Normal    Started                  pod/external-provisioner-p5xdr   Started container nfs-provisioner
volume-3891                          28s   Normal    Scheduled                pod/nfs-server   Successfully assigned volume-3891/nfs-server to bootstrap-e2e-minion-group-qkcq
volume-3891                          27s   Normal    Pulling                  pod/nfs-server   Pulling image "gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0"
volume-3891                          14s   Normal    Pulled                   pod/nfs-server   Successfully pulled image "gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0"
volume-3891                          13s   Normal    Created                  pod/nfs-server   Created container nfs-server
volume-3891                          13s
Normal    Started                      pod/nfs-server                                                      Started container nfs-server\nvolume-3891                          12s         Warning   ProvisioningFailed           persistentvolumeclaim/pvc-wzdbt                                     storageclass.storage.k8s.io \"volume-3891\" not found\nvolume-4652                          10s         Normal    Scheduled                    pod/gcepd-client                                                    Successfully assigned volume-4652/gcepd-client to bootstrap-e2e-minion-group-qn53\nvolume-4652                          38s         Normal    Scheduled                    pod/gcepd-injector                                                  Successfully assigned volume-4652/gcepd-injector to bootstrap-e2e-minion-group-qn53\nvolume-4652                          31s         Normal    SuccessfulAttachVolume       pod/gcepd-injector                                                  AttachVolume.Attach succeeded for volume \"gcepd-fzctn\"\nvolume-4652                          24s         Normal    Pulled                       pod/gcepd-injector                                                  Container image \"docker.io/library/busybox:1.29\" already present on machine\nvolume-4652                          24s         Normal    Created                      pod/gcepd-injector                                                  Created container gcepd-injector\nvolume-4652                          23s         Normal    Started                      pod/gcepd-injector                                                  Started container gcepd-injector\nvolume-4652                          16s         Normal    Killing                      pod/gcepd-injector                                                  Stopping container gcepd-injector\nvolume-4652                          51s         Warning   ProvisioningFailed           persistentvolumeclaim/pvc-sn694                                 
    storageclass.storage.k8s.io \"volume-4652\" not found\nvolume-5786                          12s         Normal    Scheduled                    pod/gcepd-injector                                                  Successfully assigned volume-5786/gcepd-injector to bootstrap-e2e-minion-group-vrtv\nvolume-5786                          5s          Normal    SuccessfulAttachVolume       pod/gcepd-injector                                                  AttachVolume.Attach succeeded for volume \"gcepd-qvrsx\"\nvolume-5786                          23s         Warning   ProvisioningFailed           persistentvolumeclaim/pvc-pqlsx                                     storageclass.storage.k8s.io \"volume-5786\" not found\nvolume-7834                          21s         Normal    Scheduled                    pod/gcepd-client                                                    Successfully assigned volume-7834/gcepd-client to bootstrap-e2e-minion-group-qkcq\nvolume-7834                          9s          Normal    SuccessfulAttachVolume       pod/gcepd-client                                                    AttachVolume.Attach succeeded for volume \"gcepd-kxjk5\"\nvolume-7834                          3s          Normal    Pulled                       pod/gcepd-client                                                    Container image \"docker.io/library/busybox:1.29\" already present on machine\nvolume-7834                          3s          Normal    Created                      pod/gcepd-client                                                    Created container gcepd-client\nvolume-7834                          3s          Normal    Started                      pod/gcepd-client                                                    Started container gcepd-client\nvolume-7834                          56s         Normal    Scheduled                    pod/gcepd-injector                                                  Successfully assigned volume-7834/gcepd-injector to 
bootstrap-e2e-minion-group-qn53\nvolume-7834                          49s         Normal    SuccessfulAttachVolume       pod/gcepd-injector                                                  AttachVolume.Attach succeeded for volume \"gcepd-kxjk5\"\nvolume-7834                          42s         Normal    Pulled                       pod/gcepd-injector                                                  Container image \"docker.io/library/busybox:1.29\" already present on machine\nvolume-7834                          41s         Normal    Created                      pod/gcepd-injector                                                  Created container gcepd-injector\nvolume-7834                          41s         Normal    Started                      pod/gcepd-injector                                                  Started container gcepd-injector\nvolume-7834                          30s         Normal    Killing                      pod/gcepd-injector                                                  Stopping container gcepd-injector\nvolume-7834                          68s         Warning   ProvisioningFailed           persistentvolumeclaim/pvc-42fr5                                     storageclass.storage.k8s.io \"volume-7834\" not found\nvolume-8973                          6s          Normal    Pulled                       pod/hostexec-bootstrap-e2e-minion-group-qn53-9nw9z                  Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nvolume-8973                          6s          Normal    Created                      pod/hostexec-bootstrap-e2e-minion-group-qn53-9nw9z                  Created container agnhost\nvolume-8973                          6s          Normal    Started                      pod/hostexec-bootstrap-e2e-minion-group-qn53-9nw9z                  Started container agnhost\n"
Jan 15 16:15:56.166: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.107.52 --kubeconfig=/workspace/.kube/config get pods --all-namespaces'
Jan 15 16:15:56.512: INFO: stderr: ""
Jan 15 16:15:56.512: INFO: stdout: "NAMESPACE                            NAME                                                    READY   STATUS                  RESTARTS   AGE\napparmor-1629                        apparmor-loader-r5w6q                                   1/1     Terminating             0          40s\ncontainer-probe-8849                 test-webserver-bccfe720-9fca-4150-8386-32304c5297d7     1/1     Running                 0          35s\ncsi-mock-volumes-187                 csi-mockplugin-0                                        3/3     Running                 0          66s\ncsi-mock-volumes-187                 csi-mockplugin-attacher-0                               1/1     Running                 0          66s\ncsi-mock-volumes-4687                csi-mockplugin-0                                        3/3     Running                 0          65s\ncsi-mock-volumes-4687                csi-mockplugin-attacher-0                               1/1     Running                 0          65s\ncsi-mock-volumes-4687                csi-mockplugin-resizer-0                                1/1     Running                 0          65s\ncsi-mock-volumes-948                 csi-mockplugin-0                                        3/3     Running                 0          68s\ncsi-mock-volumes-948                 csi-mockplugin-resizer-0                                1/1     Running                 0          67s\ncsi-mock-volumes-948                 pvc-volume-tester-w6rhc                                 1/1     Running                 0          44s\nephemeral-1794                       csi-hostpath-attacher-0                                 1/1     Running                 0          64s\nephemeral-1794                       csi-hostpath-provisioner-0                              1/1     Running                 0          65s\nephemeral-1794                       csi-hostpath-resizer-0                                  1/1     Running                 0     
     65s\nephemeral-1794                       csi-hostpathplugin-0                                    3/3     Running                 0          66s\nephemeral-1794                       csi-snapshotter-0                                       1/1     Running                 0          65s\nephemeral-1794                       inline-volume-tester-6f9st                              1/1     Terminating             0          66s\nephemeral-4116                       csi-hostpath-attacher-0                                 1/1     Running                 0          72s\nephemeral-4116                       csi-hostpath-provisioner-0                              1/1     Running                 0          72s\nephemeral-4116                       csi-hostpath-resizer-0                                  1/1     Running                 0          72s\nephemeral-4116                       csi-hostpathplugin-0                                    3/3     Running                 0          73s\nephemeral-4116                       csi-snapshotter-0                                       1/1     Running                 0          72s\nephemeral-4116                       inline-volume-tester-s57ft                              1/1     Terminating             0          73s\ninit-container-7415                  pod-init-a6d6f31a-10cf-490f-9efd-27a91145d0ed           0/1     Init:CrashLoopBackOff   1          17s\nkube-system                          coredns-65567c7b57-kdfdw                                1/1     Running                 0          4m37s\nkube-system                          coredns-65567c7b57-n7vgj                                1/1     Running                 0          4m3s\nkube-system                          etcd-empty-dir-cleanup-bootstrap-e2e-master             1/1     Running                 0          4m10s\nkube-system                          etcd-server-bootstrap-e2e-master                        1/1     Running                 0          
3m46s\nkube-system                          etcd-server-events-bootstrap-e2e-master                 1/1     Running                 0          3m50s\nkube-system                          event-exporter-v0.3.1-747b47fcd-js4fh                   2/2     Running                 0          4m42s\nkube-system                          fluentd-gcp-scaler-76d9c77b4d-sk5bz                     1/1     Running                 0          4m36s\nkube-system                          fluentd-gcp-v3.2.0-6tspz                                2/2     Running                 0          3m18s\nkube-system                          fluentd-gcp-v3.2.0-mxnmk                                2/2     Running                 0          3m37s\nkube-system                          fluentd-gcp-v3.2.0-t6mk4                                2/2     Running                 0          2m57s\nkube-system                          fluentd-gcp-v3.2.0-vqmcb                                2/2     Running                 0          3m51s\nkube-system                          fluentd-gcp-v3.2.0-zcg6h                                2/2     Running                 0          3m27s\nkube-system                          kube-addon-manager-bootstrap-e2e-master                 1/1     Running                 0          3m34s\nkube-system                          kube-apiserver-bootstrap-e2e-master                     1/1     Running                 0          4m25s\nkube-system                          kube-controller-manager-bootstrap-e2e-master            1/1     Running                 0          4m25s\nkube-system                          kube-dns-autoscaler-65bc6d4889-c4f5l                    1/1     Running                 0          33s\nkube-system                          kube-proxy-bootstrap-e2e-minion-group-q10p              1/1     Running                 0          4m27s\nkube-system                          kube-proxy-bootstrap-e2e-minion-group-qkcq              1/1     Running                 0          
4m26s\nkube-system                          kube-proxy-bootstrap-e2e-minion-group-qn53              1/1     Running                 0          4m27s\nkube-system                          kube-proxy-bootstrap-e2e-minion-group-vrtv              1/1     Running                 0          4m26s\nkube-system                          kube-scheduler-bootstrap-e2e-master                     1/1     Running                 0          4m7s\nkube-system                          kubernetes-dashboard-7778f8b456-wjltm                   1/1     Running                 0          4m36s\nkube-system                          l7-default-backend-678889f899-4q2t5                     1/1     Running                 0          4m37s\nkube-system                          l7-lb-controller-bootstrap-e2e-master                   1/1     Running                 2          4m30s\nkube-system                          metadata-proxy-v0.1-666fv                               2/2     Running                 0          4m27s\nkube-system                          metadata-proxy-v0.1-9nsx7                               2/2     Running                 0          4m26s\nkube-system                          metadata-proxy-v0.1-chbgg                               2/2     Running                 0          4m31s\nkube-system                          metadata-proxy-v0.1-nkdb2                               2/2     Running                 0          4m27s\nkube-system                          metadata-proxy-v0.1-zt754                               2/2     Running                 0          4m26s\nkube-system                          metrics-server-v0.3.6-5f859c87d6-dtqxc                  2/2     Running                 0          3m55s\nkube-system                          volume-snapshot-controller-0                            1/1     Running                 0          4m35s\nkubectl-7630                         pod1g5bncrtz7t                                          0/1     Pending                 0         
 0s\nnettest-2543                         netserver-0                                             1/1     Running                 0          75s\nnettest-2543                         netserver-1                                             1/1     Running                 0          75s\nnettest-2543                         netserver-2                                             1/1     Running                 0          75s\nnettest-2543                         netserver-3                                             1/1     Running                 0          75s\nnettest-2543                         test-container-pod                                      1/1     Running                 0          38s\npersistent-local-volumes-test-158    hostexec-bootstrap-e2e-minion-group-q10p-zs7w8          1/1     Running                 0          15s\npersistent-local-volumes-test-4682   hostexec-bootstrap-e2e-minion-group-q10p-j2kgb          1/1     Running                 0          8s\npersistent-local-volumes-test-8451   hostexec-bootstrap-e2e-minion-group-q10p-prdbx          1/1     Running                 0          72s\npersistent-local-volumes-test-8451   security-context-81c57741-b951-488f-985a-204e150ae56e   1/1     Terminating             0          20s\npersistent-local-volumes-test-8451   security-context-f4c73856-24ab-45a9-a0d4-2ea953bd87c2   1/1     Terminating             0          38s\nprojected-5454                       labelsupdatec42af265-6ecc-4902-9990-c4de108151c2        0/1     ContainerCreating       0          3s\nprovisioning-2262                    hostexec-bootstrap-e2e-minion-group-vrtv-mc4r8          1/1     Running                 0          38s\nprovisioning-4978                    hostexec-bootstrap-e2e-minion-group-q10p-h5xdl          1/1     Running                 0          74s\nprovisioning-4978                    pod-subpath-test-preprovisionedpv-tlms                  0/1     ContainerCreating       0          2s\nprovisioning-7841       
             pod-subpath-test-inlinevolume-4csd                      1/1     Running                 0          23s\nprovisioning-8413                    hostexec-bootstrap-e2e-minion-group-qkcq-ch4n5          1/1     Running                 0          18s\nprovisioning-8742                    gluster-server                                          0/1     ContainerCreating       0          4s\nprovisioning-990                     pvc-volume-tester-reader-hxjzh                          0/1     ContainerCreating       0          11s\nproxy-3473                           proxy-service-b7gm6-njwbl                               0/1     Pending                 0          0s\npv-4462                              nfs-server                                              0/1     ContainerCreating       0          11s\nservices-971                         service-proxy-disabled-flvrl                            1/1     Running                 0          74s\nservices-971                         service-proxy-disabled-mblhb                            1/1     Running                 0          74s\nservices-971                         service-proxy-disabled-s97tt                            1/1     Running                 0          74s\nservices-971                         service-proxy-toggled-gvk2f                             1/1     Running                 0          46s\nservices-971                         service-proxy-toggled-rhvz4                             1/1     Running                 0          46s\nservices-971                         service-proxy-toggled-zh586                             1/1     Running                 0          46s\nsysctl-2620                          sysctl-d11cfdbf-401e-40ad-94a3-17fbb042a280             0/1     ContainerCreating       0          3s\nvolume-3891                          external-provisioner-p5xdr                              1/1     Running                 0          40s\nvolume-3891                          nfs-server      
                                        1/1     Running                 0          30s\nvolume-4652                          gcepd-client                                            0/1     ContainerCreating       0          11s\nvolume-5786                          gcepd-injector                                          0/1     ContainerCreating       0          13s\nvolume-7834                          gcepd-client                                            1/1     Running                 0          22s\nvolume-8973                          hostexec-bootstrap-e2e-minion-group-qn53-9nw9z          1/1     Running                 0          8s\nvolume-9958                          hostexec-bootstrap-e2e-minion-group-qn53-lxnz8          0/1     ContainerCreating       0          0s\n"
Jan 15 16:15:56.671: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.107.52 --kubeconfig=/workspace/.kube/config get podtemplates --all-namespaces'
Jan 15 16:15:56.997: INFO: stderr: ""
Jan 15 16:15:56.997: INFO: stdout: "NAMESPACE      NAME                CONTAINERS   IMAGES          POD LABELS\nkubectl-7630   pt1nameg5bncrtz7t   container9   fedora:latest   pt=01\n"
... skipping 35 lines ...
Jan 15 16:16:07.614: INFO: stdout: "NAMESPACE      NAME                  DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                                                            AGE\nkube-system    fluentd-gcp-v3.2.0    5         5         5       5            5           beta.kubernetes.io/os=linux                                              4m53s\nkube-system    metadata-proxy-v0.1   5         5         5       5            5           beta.kubernetes.io/os=linux,cloud.google.com/metadata-proxy-ready=true   4m50s\nkubectl-7630   ds6g5bncrtz7t         0         0         0       0            0           <none>                                                                   0s\n"
Jan 15 16:16:08.127: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.107.52 --kubeconfig=/workspace/.kube/config get replicasets --all-namespaces'
Jan 15 16:16:08.440: INFO: stderr: ""
Jan 15 16:16:08.440: INFO: stdout: "NAMESPACE      NAME                               DESIRED   CURRENT   READY   AGE\nkube-system    coredns-65567c7b57                 2         2         2       4m54s\nkube-system    event-exporter-v0.3.1-747b47fcd    1         1         1       4m54s\nkube-system    fluentd-gcp-scaler-76d9c77b4d      1         1         1       4m48s\nkube-system    kube-dns-autoscaler-65bc6d4889     1         1         1       4m54s\nkube-system    kubernetes-dashboard-7778f8b456    1         1         1       4m48s\nkube-system    l7-default-backend-678889f899      1         1         1       4m54s\nkube-system    metrics-server-v0.3.6-5f859c87d6   1         1         1       4m7s\nkube-system    metrics-server-v0.3.6-65d4dc878    0         0         0       4m50s\nkubectl-7630   rs3g5bncrtz7t                      1         0         0       1s\n"
Jan 15 16:16:08.801: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.107.52 --kubeconfig=/workspace/.kube/config get events --all-namespaces'
Jan 15 16:16:09.923: INFO: stderr: ""
Jan 15 16:16:09.923: INFO: stdout: "NAMESPACE                            LAST SEEN   TYPE      REASON                       OBJECT                                                           MESSAGE\ncontainer-probe-8849                 48s         Normal    Scheduled                    pod/test-webserver-bccfe720-9fca-4150-8386-32304c5297d7          Successfully assigned container-probe-8849/test-webserver-bccfe720-9fca-4150-8386-32304c5297d7 to bootstrap-e2e-minion-group-qn53\ncontainer-probe-8849                 45s         Normal    Pulling                      pod/test-webserver-bccfe720-9fca-4150-8386-32304c5297d7          Pulling image \"gcr.io/kubernetes-e2e-test-images/test-webserver:1.0\"\ncontainer-probe-8849                 43s         Normal    Pulled                       pod/test-webserver-bccfe720-9fca-4150-8386-32304c5297d7          Successfully pulled image \"gcr.io/kubernetes-e2e-test-images/test-webserver:1.0\"\ncontainer-probe-8849                 43s         Normal    Created                      pod/test-webserver-bccfe720-9fca-4150-8386-32304c5297d7          Created container test-webserver\ncontainer-probe-8849                 43s         Normal    Started                      pod/test-webserver-bccfe720-9fca-4150-8386-32304c5297d7          Started container test-webserver\ncsi-mock-volumes-187                 35s         Normal    Scheduled                    pod/csi-inline-volume-gltxz                                      Successfully assigned csi-mock-volumes-187/csi-inline-volume-gltxz to bootstrap-e2e-minion-group-qn53\ncsi-mock-volumes-187                 76s         Normal    Pulling                      pod/csi-mockplugin-0                                             Pulling image \"quay.io/k8scsi/csi-provisioner:v1.5.0\"\ncsi-mock-volumes-187                 68s         Normal    Pulled                       pod/csi-mockplugin-0                                             Successfully pulled image 
\"quay.io/k8scsi/csi-provisioner:v1.5.0\"\ncsi-mock-volumes-187                 68s         Normal    Created                      pod/csi-mockplugin-0                                             Created container csi-provisioner\ncsi-mock-volumes-187                 68s         Normal    Started                      pod/csi-mockplugin-0                                             Started container csi-provisioner\ncsi-mock-volumes-187                 68s         Normal    Pulling                      pod/csi-mockplugin-0                                             Pulling image \"quay.io/k8scsi/csi-node-driver-registrar:v1.2.0\"\ncsi-mock-volumes-187                 64s         Normal    Pulled                       pod/csi-mockplugin-0                                             Successfully pulled image \"quay.io/k8scsi/csi-node-driver-registrar:v1.2.0\"\ncsi-mock-volumes-187                 63s         Normal    Created                      pod/csi-mockplugin-0                                             Created container driver-registrar\ncsi-mock-volumes-187                 62s         Normal    Started                      pod/csi-mockplugin-0                                             Started container driver-registrar\ncsi-mock-volumes-187                 62s         Normal    Pulling                      pod/csi-mockplugin-0                                             Pulling image \"quay.io/k8scsi/mock-driver:v2.1.0\"\ncsi-mock-volumes-187                 59s         Normal    Pulled                       pod/csi-mockplugin-0                                             Successfully pulled image \"quay.io/k8scsi/mock-driver:v2.1.0\"\ncsi-mock-volumes-187                 59s         Normal    Created                      pod/csi-mockplugin-0                                             Created container mock\ncsi-mock-volumes-187                 59s         Normal    Started                      pod/csi-mockplugin-0                                          
   Started container mock\ncsi-mock-volumes-187                 8s          Normal    Killing                      pod/csi-mockplugin-0                                             Stopping container csi-provisioner\ncsi-mock-volumes-187                 8s          Normal    Killing                      pod/csi-mockplugin-0                                             Stopping container mock\ncsi-mock-volumes-187                 8s          Normal    Killing                      pod/csi-mockplugin-0                                             Stopping container driver-registrar\ncsi-mock-volumes-187                 76s         Normal    Pulling                      pod/csi-mockplugin-attacher-0                                    Pulling image \"quay.io/k8scsi/csi-attacher:v2.1.0\"\ncsi-mock-volumes-187                 68s         Normal    Pulled                       pod/csi-mockplugin-attacher-0                                    Successfully pulled image \"quay.io/k8scsi/csi-attacher:v2.1.0\"\ncsi-mock-volumes-187                 68s         Normal    Created                      pod/csi-mockplugin-attacher-0                                    Created container csi-attacher\ncsi-mock-volumes-187                 67s         Normal    Started                      pod/csi-mockplugin-attacher-0                                    Started container csi-attacher\ncsi-mock-volumes-187                 7s          Normal    Killing                      pod/csi-mockplugin-attacher-0                                    Stopping container csi-attacher\ncsi-mock-volumes-187                 79s         Normal    SuccessfulCreate             statefulset/csi-mockplugin-attacher                              create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful\ncsi-mock-volumes-187                 79s         Normal    SuccessfulCreate             statefulset/csi-mockplugin                                       create Pod csi-mockplugin-0 in 
StatefulSet csi-mockplugin successful
csi-mock-volumes-187   72s   Normal   ExternalProvisioning   persistentvolumeclaim/pvc-mss5l   waiting for a volume to be created, either by external provisioner "csi-mock-csi-mock-volumes-187" or manually created by system administrator
csi-mock-volumes-187   58s   Normal   Provisioning   persistentvolumeclaim/pvc-mss5l   External provisioner is provisioning volume for claim "csi-mock-volumes-187/pvc-mss5l"
csi-mock-volumes-187   58s   Normal   ProvisioningSucceeded   persistentvolumeclaim/pvc-mss5l   Successfully provisioned volume pvc-4f1ac8b3-2aef-4e91-83ec-060e978bc6cc
csi-mock-volumes-187   55s   Normal   SuccessfulAttachVolume   pod/pvc-volume-tester-t6g8t   AttachVolume.Attach succeeded for volume "pvc-4f1ac8b3-2aef-4e91-83ec-060e978bc6cc"
csi-mock-volumes-187   37s   Normal   Pulled   pod/pvc-volume-tester-t6g8t   Container image "k8s.gcr.io/pause:3.1" already present on machine
csi-mock-volumes-187   37s   Normal   Created   pod/pvc-volume-tester-t6g8t   Created container volume-tester
csi-mock-volumes-187   36s   Normal   Started   pod/pvc-volume-tester-t6g8t   Started container volume-tester
csi-mock-volumes-187   32s   Normal   Killing   pod/pvc-volume-tester-t6g8t   Stopping container volume-tester
csi-mock-volumes-4687   68s   Normal   Pulling   pod/csi-mockplugin-0   Pulling image "quay.io/k8scsi/csi-node-driver-registrar:v1.2.0"
csi-mock-volumes-4687   66s   Normal   Pulled   pod/csi-mockplugin-0   Successfully pulled image "quay.io/k8scsi/csi-node-driver-registrar:v1.2.0"
csi-mock-volumes-4687   65s   Normal   Created   pod/csi-mockplugin-0   Created container driver-registrar
csi-mock-volumes-4687   65s   Normal   Started   pod/csi-mockplugin-0   Started container driver-registrar
csi-mock-volumes-4687   65s   Normal   Pulling   pod/csi-mockplugin-0   Pulling image "quay.io/k8scsi/mock-driver:v2.1.0"
csi-mock-volumes-4687   62s   Normal   Pulled   pod/csi-mockplugin-0   Successfully pulled image "quay.io/k8scsi/mock-driver:v2.1.0"
csi-mock-volumes-4687   61s   Normal   Created   pod/csi-mockplugin-0   Created container mock
csi-mock-volumes-4687   61s   Normal   Started   pod/csi-mockplugin-0   Started container mock
csi-mock-volumes-4687   13s   Normal   Killing   pod/csi-mockplugin-0   Stopping container csi-provisioner
csi-mock-volumes-4687   13s   Normal   Killing   pod/csi-mockplugin-0   Stopping container mock
csi-mock-volumes-4687   13s   Normal   Killing   pod/csi-mockplugin-0   Stopping container driver-registrar
csi-mock-volumes-4687   74s   Normal   Pulling   pod/csi-mockplugin-attacher-0   Pulling image "quay.io/k8scsi/csi-attacher:v2.1.0"
csi-mock-volumes-4687   69s   Normal   Pulled   pod/csi-mockplugin-attacher-0   Successfully pulled image "quay.io/k8scsi/csi-attacher:v2.1.0"
csi-mock-volumes-4687   69s   Normal   Created   pod/csi-mockplugin-attacher-0   Created container csi-attacher
csi-mock-volumes-4687   68s   Normal   Started   pod/csi-mockplugin-attacher-0   Started container csi-attacher
csi-mock-volumes-4687   13s   Normal   Killing   pod/csi-mockplugin-attacher-0   Stopping container csi-attacher
csi-mock-volumes-4687   78s   Normal   SuccessfulCreate   statefulset/csi-mockplugin-attacher   create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful
csi-mock-volumes-4687   74s   Normal   Pulling   pod/csi-mockplugin-resizer-0   Pulling image "quay.io/k8scsi/csi-resizer:v0.4.0"
csi-mock-volumes-4687   71s   Normal   Pulled   pod/csi-mockplugin-resizer-0   Successfully pulled image "quay.io/k8scsi/csi-resizer:v0.4.0"
csi-mock-volumes-4687   12s   Normal   Created   pod/csi-mockplugin-resizer-0   Created container csi-resizer
csi-mock-volumes-4687   11s   Normal   Started   pod/csi-mockplugin-resizer-0   Started container csi-resizer
csi-mock-volumes-4687   78s   Normal   SuccessfulCreate   statefulset/csi-mockplugin-resizer   create Pod csi-mockplugin-resizer-0 in StatefulSet csi-mockplugin-resizer successful
csi-mock-volumes-4687   78s   Normal   SuccessfulCreate   statefulset/csi-mockplugin   create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful
csi-mock-volumes-4687   72s   Normal   ExternalProvisioning   persistentvolumeclaim/pvc-5q9sx   waiting for a volume to be created, either by external provisioner "csi-mock-csi-mock-volumes-4687" or manually created by system administrator
csi-mock-volumes-4687   59s   Normal   Provisioning   persistentvolumeclaim/pvc-5q9sx   External provisioner is provisioning volume for claim "csi-mock-volumes-4687/pvc-5q9sx"
csi-mock-volumes-4687   59s   Normal   ProvisioningSucceeded   persistentvolumeclaim/pvc-5q9sx   Successfully provisioned volume pvc-3884a223-847f-44bc-b1a8-ed8e224d61ef
csi-mock-volumes-4687   49s   Warning   ExternalExpanding   persistentvolumeclaim/pvc-5q9sx   Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.
csi-mock-volumes-4687   49s   Normal   Resizing   persistentvolumeclaim/pvc-5q9sx   External resizer is resizing volume pvc-3884a223-847f-44bc-b1a8-ed8e224d61ef
csi-mock-volumes-4687   48s   Normal   FileSystemResizeRequired   persistentvolumeclaim/pvc-5q9sx   Require file system resize of volume on node
csi-mock-volumes-4687   33s   Normal   FileSystemResizeSuccessful   persistentvolumeclaim/pvc-5q9sx   MountVolume.NodeExpandVolume succeeded for volume "pvc-3884a223-847f-44bc-b1a8-ed8e224d61ef"
csi-mock-volumes-4687   57s   Normal   SuccessfulAttachVolume   pod/pvc-volume-tester-h67hb   AttachVolume.Attach succeeded for volume "pvc-3884a223-847f-44bc-b1a8-ed8e224d61ef"
csi-mock-volumes-4687   53s   Normal   Pulled   pod/pvc-volume-tester-h67hb   Container image "k8s.gcr.io/pause:3.1" already present on machine
csi-mock-volumes-4687   53s   Normal   Created   pod/pvc-volume-tester-h67hb   Created container volume-tester
csi-mock-volumes-4687   52s   Normal   Started   pod/pvc-volume-tester-h67hb   Started container volume-tester
csi-mock-volumes-4687   46s   Normal   Killing   pod/pvc-volume-tester-h67hb   Stopping container volume-tester
csi-mock-volumes-4687   33s   Normal   FileSystemResizeSuccessful   pod/pvc-volume-tester-v64xj   MountVolume.NodeExpandVolume succeeded for volume "pvc-3884a223-847f-44bc-b1a8-ed8e224d61ef"
csi-mock-volumes-4687   31s   Normal   Pulled   pod/pvc-volume-tester-v64xj   Container image "k8s.gcr.io/pause:3.1" already present on machine
csi-mock-volumes-4687   31s   Normal   Created   pod/pvc-volume-tester-v64xj   Created container volume-tester
csi-mock-volumes-4687   30s   Normal   Started   pod/pvc-volume-tester-v64xj   Started container volume-tester
csi-mock-volumes-948   77s   Normal   Pulling   pod/csi-mockplugin-0   Pulling image "quay.io/k8scsi/csi-provisioner:v1.5.0"
csi-mock-volumes-948   71s   Normal   Pulled   pod/csi-mockplugin-0   Successfully pulled image "quay.io/k8scsi/csi-provisioner:v1.5.0"
csi-mock-volumes-948   69s   Normal   Created   pod/csi-mockplugin-0   Created container csi-provisioner
csi-mock-volumes-948   68s   Normal   Started   pod/csi-mockplugin-0   Started container csi-provisioner
csi-mock-volumes-948   68s   Normal   Pulling   pod/csi-mockplugin-0   Pulling image "quay.io/k8scsi/csi-node-driver-registrar:v1.2.0"
csi-mock-volumes-948   66s   Normal   Pulled   pod/csi-mockplugin-0   Successfully pulled image "quay.io/k8scsi/csi-node-driver-registrar:v1.2.0"
csi-mock-volumes-948   65s   Normal   Created   pod/csi-mockplugin-0   Created container driver-registrar
csi-mock-volumes-948   65s   Normal   Started   pod/csi-mockplugin-0   Started container driver-registrar
csi-mock-volumes-948   65s   Normal   Pulling   pod/csi-mockplugin-0   Pulling image "quay.io/k8scsi/mock-driver:v2.1.0"
csi-mock-volumes-948   62s   Normal   Pulled   pod/csi-mockplugin-0   Successfully pulled image "quay.io/k8scsi/mock-driver:v2.1.0"
csi-mock-volumes-948   61s   Normal   Created   pod/csi-mockplugin-0   Created container mock
csi-mock-volumes-948   61s   Normal   Started   pod/csi-mockplugin-0   Started container mock
csi-mock-volumes-948   77s   Normal   Pulling   pod/csi-mockplugin-resizer-0   Pulling image "quay.io/k8scsi/csi-resizer:v0.4.0"
csi-mock-volumes-948   71s   Normal   Pulled   pod/csi-mockplugin-resizer-0   Successfully pulled image "quay.io/k8scsi/csi-resizer:v0.4.0"
csi-mock-volumes-948   69s   Normal   Created   pod/csi-mockplugin-resizer-0   Created container csi-resizer
csi-mock-volumes-948   68s   Normal   Started   pod/csi-mockplugin-resizer-0   Started container csi-resizer
csi-mock-volumes-948   80s   Normal   SuccessfulCreate   statefulset/csi-mockplugin-resizer   create Pod csi-mockplugin-resizer-0 in StatefulSet csi-mockplugin-resizer successful
csi-mock-volumes-948   80s   Normal   SuccessfulCreate   statefulset/csi-mockplugin   create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful
csi-mock-volumes-948   71s   Normal   ExternalProvisioning   persistentvolumeclaim/pvc-s9csn   waiting for a volume to be created, either by external provisioner "csi-mock-csi-mock-volumes-948" or manually created by system administrator
csi-mock-volumes-948   59s   Normal   Provisioning   persistentvolumeclaim/pvc-s9csn   External provisioner is provisioning volume for claim "csi-mock-volumes-948/pvc-s9csn"
csi-mock-volumes-948   59s   Normal   ProvisioningSucceeded   persistentvolumeclaim/pvc-s9csn   Successfully provisioned volume pvc-383be570-15ca-4a20-b476-e1e1effeb0c0
csi-mock-volumes-948   51s   Warning   ExternalExpanding   persistentvolumeclaim/pvc-s9csn   Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.
csi-mock-volumes-948   51s   Normal   Resizing   persistentvolumeclaim/pvc-s9csn   External resizer is resizing volume pvc-383be570-15ca-4a20-b476-e1e1effeb0c0
csi-mock-volumes-948   50s   Normal   FileSystemResizeRequired   persistentvolumeclaim/pvc-s9csn   Require file system resize of volume on node
csi-mock-volumes-948   54s   Normal   Pulled   pod/pvc-volume-tester-w6rhc   Container image "k8s.gcr.io/pause:3.1" already present on machine
csi-mock-volumes-948   54s   Normal   Created   pod/pvc-volume-tester-w6rhc   Created container volume-tester
csi-mock-volumes-948   53s   Normal   Started   pod/pvc-volume-tester-w6rhc   Started container volume-tester
default   4m43s   Normal   RegisteredNode   node/bootstrap-e2e-master   Node bootstrap-e2e-master event: Registered Node bootstrap-e2e-master in Controller
default   4m40s   Normal   Starting   node/bootstrap-e2e-minion-group-q10p   Starting kubelet.
default   4m40s   Normal   NodeHasSufficientMemory   node/bootstrap-e2e-minion-group-q10p   Node bootstrap-e2e-minion-group-q10p status is now: NodeHasSufficientMemory
default   4m40s   Normal   NodeHasNoDiskPressure   node/bootstrap-e2e-minion-group-q10p   Node bootstrap-e2e-minion-group-q10p status is now: NodeHasNoDiskPressure
default   4m40s   Normal   NodeHasSufficientPID   node/bootstrap-e2e-minion-group-q10p   Node bootstrap-e2e-minion-group-q10p status is now: NodeHasSufficientPID
default   4m40s   Normal   NodeAllocatableEnforced   node/bootstrap-e2e-minion-group-q10p   Updated Node Allocatable limit across pods
default   4m39s   Normal   NodeReady   node/bootstrap-e2e-minion-group-q10p   Node bootstrap-e2e-minion-group-q10p status is now: NodeReady
default   4m38s   Normal   Starting   node/bootstrap-e2e-minion-group-q10p   Starting kube-proxy.
default   4m38s   Normal   RegisteredNode   node/bootstrap-e2e-minion-group-q10p   Node bootstrap-e2e-minion-group-q10p event: Registered Node bootstrap-e2e-minion-group-q10p in Controller
default   4m35s   Warning   ContainerdStart   node/bootstrap-e2e-minion-group-q10p   Starting containerd container runtime...
default   4m35s   Warning   DockerStart   node/bootstrap-e2e-minion-group-q10p   Starting Docker Application Container Engine...
default   4m35s   Warning   KubeletStart   node/bootstrap-e2e-minion-group-q10p   Started Kubernetes kubelet.
default   4m40s   Normal   Starting   node/bootstrap-e2e-minion-group-qkcq   Starting kubelet.
default   4m39s   Normal   NodeHasSufficientMemory   node/bootstrap-e2e-minion-group-qkcq   Node bootstrap-e2e-minion-group-qkcq status is now: NodeHasSufficientMemory
default   4m39s   Normal   NodeHasNoDiskPressure   node/bootstrap-e2e-minion-group-qkcq   Node bootstrap-e2e-minion-group-qkcq status is now: NodeHasNoDiskPressure
default   4m39s   Normal   NodeHasSufficientPID   node/bootstrap-e2e-minion-group-qkcq   Node bootstrap-e2e-minion-group-qkcq status is now: NodeHasSufficientPID
default   4m39s   Normal   NodeAllocatableEnforced   node/bootstrap-e2e-minion-group-qkcq   Updated Node Allocatable limit across pods
default   4m39s   Normal   NodeReady   node/bootstrap-e2e-minion-group-qkcq   Node bootstrap-e2e-minion-group-qkcq status is now: NodeReady
default   4m38s   Normal   RegisteredNode   node/bootstrap-e2e-minion-group-qkcq   Node bootstrap-e2e-minion-group-qkcq event: Registered Node bootstrap-e2e-minion-group-qkcq in Controller
default   4m37s   Normal   Starting   node/bootstrap-e2e-minion-group-qkcq   Starting kube-proxy.
default   4m34s   Warning   ContainerdStart   node/bootstrap-e2e-minion-group-qkcq   Starting containerd container runtime...
default   4m34s   Warning   DockerStart   node/bootstrap-e2e-minion-group-qkcq   Starting Docker Application Container Engine...
default   4m34s   Warning   KubeletStart   node/bootstrap-e2e-minion-group-qkcq   Started Kubernetes kubelet.
default   4m41s   Normal   Starting   node/bootstrap-e2e-minion-group-qn53   Starting kubelet.
default   4m40s   Normal   NodeHasSufficientMemory   node/bootstrap-e2e-minion-group-qn53   Node bootstrap-e2e-minion-group-qn53 status is now: NodeHasSufficientMemory
default   4m40s   Normal   NodeHasNoDiskPressure   node/bootstrap-e2e-minion-group-qn53   Node bootstrap-e2e-minion-group-qn53 status is now: NodeHasNoDiskPressure
default   4m40s   Normal   NodeHasSufficientPID   node/bootstrap-e2e-minion-group-qn53   Node bootstrap-e2e-minion-group-qn53 status is now: NodeHasSufficientPID
default   4m40s   Normal   NodeAllocatableEnforced   node/bootstrap-e2e-minion-group-qn53   Updated Node Allocatable limit across pods
default   4m40s   Normal   NodeReady   node/bootstrap-e2e-minion-group-qn53   Node bootstrap-e2e-minion-group-qn53 status is now: NodeReady
default   4m38s   Normal   Starting   node/bootstrap-e2e-minion-group-qn53   Starting kube-proxy.
default   4m38s   Normal   RegisteredNode   node/bootstrap-e2e-minion-group-qn53   Node bootstrap-e2e-minion-group-qn53 event: Registered Node bootstrap-e2e-minion-group-qn53 in Controller
default   4m36s   Warning   ContainerdStart   node/bootstrap-e2e-minion-group-qn53   Starting containerd container runtime...
default   4m35s   Warning   DockerStart   node/bootstrap-e2e-minion-group-qn53   Starting Docker Application Container Engine...
default   4m35s   Warning   KubeletStart   node/bootstrap-e2e-minion-group-qn53   Started Kubernetes kubelet.
default   4m39s   Normal   Starting   node/bootstrap-e2e-minion-group-vrtv   Starting kubelet.
default   4m39s   Normal   NodeHasSufficientMemory   node/bootstrap-e2e-minion-group-vrtv   Node bootstrap-e2e-minion-group-vrtv status is now: NodeHasSufficientMemory
default   4m39s   Normal   NodeHasNoDiskPressure   node/bootstrap-e2e-minion-group-vrtv   Node bootstrap-e2e-minion-group-vrtv status is now: NodeHasNoDiskPressure
default   4m39s   Normal   NodeHasSufficientPID   node/bootstrap-e2e-minion-group-vrtv   Node bootstrap-e2e-minion-group-vrtv status is now: NodeHasSufficientPID
default   4m39s   Normal   NodeAllocatableEnforced   node/bootstrap-e2e-minion-group-vrtv   Updated Node Allocatable limit across pods
default   4m38s   Normal   NodeReady   node/bootstrap-e2e-minion-group-vrtv   Node bootstrap-e2e-minion-group-vrtv status is now: NodeReady
default   4m38s   Normal   RegisteredNode   node/bootstrap-e2e-minion-group-vrtv   Node bootstrap-e2e-minion-group-vrtv event: Registered Node bootstrap-e2e-minion-group-vrtv in Controller
default   4m36s   Normal   Starting   node/bootstrap-e2e-minion-group-vrtv   Starting kube-proxy.
default   4m35s   Warning   ContainerdStart   node/bootstrap-e2e-minion-group-vrtv   Starting containerd container runtime...
default   4m35s   Warning   DockerStart   node/bootstrap-e2e-minion-group-vrtv   Starting Docker Application Container Engine...
default   4m35s   Warning   KubeletStart   node/bootstrap-e2e-minion-group-vrtv   Started Kubernetes kubelet.
disruption-1462   2s   Normal   NoPods   poddisruptionbudget/foo   No matching pods found
ephemeral-1794   72s   Normal   Pulling   pod/csi-hostpath-attacher-0   Pulling image "quay.io/k8scsi/csi-attacher:v2.1.0"
ephemeral-1794   61s   Normal   Pulled   pod/csi-hostpath-attacher-0   Successfully pulled image "quay.io/k8scsi/csi-attacher:v2.1.0"
ephemeral-1794   60s   Normal   Created   pod/csi-hostpath-attacher-0   Created container csi-attacher
ephemeral-1794   57s   Normal   Started   pod/csi-hostpath-attacher-0   Started container csi-attacher
ephemeral-1794   79s   Warning   FailedCreate   statefulset/csi-hostpath-attacher   create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher failed error: pods "csi-hostpath-attacher-0" is forbidden: unable to validate against any pod security policy: []
ephemeral-1794   77s   Normal   SuccessfulCreate   statefulset/csi-hostpath-attacher   create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful
ephemeral-1794   72s   Normal   Pulling   pod/csi-hostpath-provisioner-0   Pulling image "quay.io/k8scsi/csi-provisioner:v1.5.0"
ephemeral-1794   61s   Normal   Pulled   pod/csi-hostpath-provisioner-0   Successfully pulled image "quay.io/k8scsi/csi-provisioner:v1.5.0"
ephemeral-1794   59s   Normal   Created   pod/csi-hostpath-provisioner-0   Created container csi-provisioner
ephemeral-1794   56s   Normal   Started   pod/csi-hostpath-provisioner-0   Started container csi-provisioner
ephemeral-1794   79s   Warning   FailedCreate   statefulset/csi-hostpath-provisioner   create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner failed error: pods "csi-hostpath-provisioner-0" is forbidden: unable to validate against any pod security policy: []
ephemeral-1794   78s   Normal   SuccessfulCreate   statefulset/csi-hostpath-provisioner   create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful
ephemeral-1794   72s   Normal   Pulling   pod/csi-hostpath-resizer-0   Pulling image "quay.io/k8scsi/csi-resizer:v0.4.0"
ephemeral-1794   61s   Normal   Pulled   pod/csi-hostpath-resizer-0   Successfully pulled image "quay.io/k8scsi/csi-resizer:v0.4.0"
ephemeral-1794   61s   Normal   Created   pod/csi-hostpath-resizer-0   Created container csi-resizer
ephemeral-1794   57s   Normal   Started   pod/csi-hostpath-resizer-0   Started container csi-resizer
ephemeral-1794   79s   Warning   FailedCreate   statefulset/csi-hostpath-resizer   create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer failed error: pods "csi-hostpath-resizer-0" is forbidden: unable to validate against any pod security policy: []
ephemeral-1794   78s   Normal   SuccessfulCreate   statefulset/csi-hostpath-resizer   create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful
ephemeral-1794   73s   Normal   Pulling   pod/csi-hostpathplugin-0   Pulling image "quay.io/k8scsi/csi-node-driver-registrar:v1.2.0"
ephemeral-1794   71s   Normal   Pulled   pod/csi-hostpathplugin-0   Successfully pulled image "quay.io/k8scsi/csi-node-driver-registrar:v1.2.0"
ephemeral-1794   70s   Normal   Created   pod/csi-hostpathplugin-0   Created container node-driver-registrar
ephemeral-1794   69s   Normal   Started   pod/csi-hostpathplugin-0   Started container node-driver-registrar
ephemeral-1794   69s   Normal   Pulling   pod/csi-hostpathplugin-0   Pulling image "quay.io/k8scsi/hostpathplugin:v1.3.0-rc1"
ephemeral-1794   58s   Normal   Pulled   pod/csi-hostpathplugin-0   Successfully pulled image "quay.io/k8scsi/hostpathplugin:v1.3.0-rc1"
ephemeral-1794   58s   Normal   Created   pod/csi-hostpathplugin-0   Created container hostpath
ephemeral-1794   54s   Normal   Started   pod/csi-hostpathplugin-0   Started container hostpath
ephemeral-1794   54s   Normal   Pulling   pod/csi-hostpathplugin-0   Pulling image "quay.io/k8scsi/livenessprobe:v1.1.0"
ephemeral-1794   51s   Normal   Pulled   pod/csi-hostpathplugin-0   Successfully pulled image "quay.io/k8scsi/livenessprobe:v1.1.0"
ephemeral-1794   51s   Normal   Created   pod/csi-hostpathplugin-0   Created container liveness-probe
ephemeral-1794   49s   Normal   Started   pod/csi-hostpathplugin-0   Started container liveness-probe
ephemeral-1794   79s   Normal   SuccessfulCreate   statefulset/csi-hostpathplugin   create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful
ephemeral-1794   72s   Normal   Pulling   pod/csi-snapshotter-0   Pulling image "quay.io/k8scsi/csi-snapshotter:v2.0.0"
ephemeral-1794   62s   Normal   Pulled   pod/csi-snapshotter-0   Successfully pulled image "quay.io/k8scsi/csi-snapshotter:v2.0.0"
ephemeral-1794   61s   Normal   Created   pod/csi-snapshotter-0   Created container csi-snapshotter
ephemeral-1794   57s   Normal   Started   pod/csi-snapshotter-0   Started container csi-snapshotter
ephemeral-1794   79s   Warning   FailedCreate   statefulset/csi-snapshotter   create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter failed error: pods "csi-snapshotter-0" is forbidden: unable to validate against any pod security policy: []
ephemeral-1794   78s   Normal   SuccessfulCreate   statefulset/csi-snapshotter   create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter successful
ephemeral-1794   62s   Warning   FailedMount   pod/inline-volume-tester-6f9st   MountVolume.SetUp failed for volume "my-volume-0" : kubernetes.io/csi: mounter.SetUpAt failed to get CSI client: driver name csi-hostpath-ephemeral-1794 not found in the list of registered CSI drivers
ephemeral-1794   43s   Normal   Pulled   pod/inline-volume-tester-6f9st   Container image "docker.io/library/busybox:1.29" already present on machine
ephemeral-1794   43s   Normal   Created   pod/inline-volume-tester-6f9st   Created container csi-volume-tester
ephemeral-1794   42s   Normal   Started   pod/inline-volume-tester-6f9st   Started container csi-volume-tester
ephemeral-1794   32s   Normal   Killing   pod/inline-volume-tester-6f9st   Stopping container csi-volume-tester
ephemeral-4116   76s   Normal   Pulling   pod/csi-hostpath-attacher-0   Pulling image "quay.io/k8scsi/csi-attacher:v2.1.0"
ephemeral-4116   61s   Normal   Pulled   pod/csi-hostpath-attacher-0   Successfully pulled image "quay.io/k8scsi/csi-attacher:v2.1.0"
ephemeral-4116   59s   Normal   Created   pod/csi-hostpath-attacher-0   Created container csi-attacher
ephemeral-4116   57s   Normal   Started   pod/csi-hostpath-attacher-0   Started container csi-attacher
ephemeral-4116   1s   Normal   Killing   pod/csi-hostpath-attacher-0   Stopping container csi-attacher
ephemeral-4116   86s   Warning   FailedCreate   statefulset/csi-hostpath-attacher   create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher failed error: pods "csi-hostpath-attacher-0" is forbidden: unable to validate against any pod security policy: []
ephemeral-4116   85s   Normal   SuccessfulCreate   statefulset/csi-hostpath-attacher   create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful
ephemeral-4116   76s   Normal   Pulling   pod/csi-hostpath-provisioner-0   Pulling image "quay.io/k8scsi/csi-provisioner:v1.5.0"
ephemeral-4116   61s   Normal   Pulled   pod/csi-hostpath-provisioner-0   Successfully pulled image "quay.io/k8scsi/csi-provisioner:v1.5.0"
ephemeral-4116   60s   Normal   Created   pod/csi-hostpath-provisioner-0   Created container csi-provisioner
ephemeral-4116   56s   Normal   Started   pod/csi-hostpath-provisioner-0   Started container csi-provisioner
ephemeral-4116   86s   Warning   FailedCreate   statefulset/csi-hostpath-provisioner   create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner failed error: pods "csi-hostpath-provisioner-0" is forbidden: unable to validate against any pod security policy: []
ephemeral-4116   85s   Normal   SuccessfulCreate   statefulset/csi-hostpath-provisioner   create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful
ephemeral-4116   75s   Normal   Pulling   pod/csi-hostpath-resizer-0   Pulling image "quay.io/k8scsi/csi-resizer:v0.4.0"
ephemeral-4116   61s   Normal   Pulled   pod/csi-hostpath-resizer-0   Successfully pulled image "quay.io/k8scsi/csi-resizer:v0.4.0"
ephemeral-4116   61s   Normal   Created   pod/csi-hostpath-resizer-0   Created container csi-resizer
ephemeral-4116   57s   Normal   Started   pod/csi-hostpath-resizer-0   Started container csi-resizer
ephemeral-4116   85s   Warning   FailedCreate   statefulset/csi-hostpath-resizer   create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer failed error: pods "csi-hostpath-resizer-0" is forbidden: unable to validate against any pod security policy: []
ephemeral-4116   85s   Normal   SuccessfulCreate   statefulset/csi-hostpath-resizer   create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful
ephemeral-4116   80s   Normal   Pulling   pod/csi-hostpathplugin-0   Pulling image "quay.io/k8scsi/csi-node-driver-registrar:v1.2.0"
ephemeral-4116   71s   Normal   Pulled   pod/csi-hostpathplugin-0   Successfully pulled image "quay.io/k8scsi/csi-node-driver-registrar:v1.2.0"
ephemeral-4116   70s   Normal   Created   pod/csi-hostpathplugin-0   Created container node-driver-registrar
ephemeral-4116   70s   Normal   Started   pod/csi-hostpathplugin-0   Started container node-driver-registrar
ephemeral-4116
                     70s         Normal    Pulling                      pod/csi-hostpathplugin-0                                         Pulling image \"quay.io/k8scsi/hostpathplugin:v1.3.0-rc1\"\nephemeral-4116                       58s         Normal    Pulled                       pod/csi-hostpathplugin-0                                         Successfully pulled image \"quay.io/k8scsi/hostpathplugin:v1.3.0-rc1\"\nephemeral-4116                       58s         Normal    Created                      pod/csi-hostpathplugin-0                                         Created container hostpath\nephemeral-4116                       55s         Normal    Started                      pod/csi-hostpathplugin-0                                         Started container hostpath\nephemeral-4116                       55s         Normal    Pulling                      pod/csi-hostpathplugin-0                                         Pulling image \"quay.io/k8scsi/livenessprobe:v1.1.0\"\nephemeral-4116                       51s         Normal    Pulled                       pod/csi-hostpathplugin-0                                         Successfully pulled image \"quay.io/k8scsi/livenessprobe:v1.1.0\"\nephemeral-4116                       51s         Normal    Created                      pod/csi-hostpathplugin-0                                         Created container liveness-probe\nephemeral-4116                       49s         Normal    Started                      pod/csi-hostpathplugin-0                                         Started container liveness-probe\nephemeral-4116                       86s         Normal    SuccessfulCreate             statefulset/csi-hostpathplugin                                   create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\nephemeral-4116                       78s         Normal    Pulling                      pod/csi-snapshotter-0                                            Pulling image 
\"quay.io/k8scsi/csi-snapshotter:v2.0.0\"\nephemeral-4116                       63s         Normal    Pulled                       pod/csi-snapshotter-0                                            Successfully pulled image \"quay.io/k8scsi/csi-snapshotter:v2.0.0\"\nephemeral-4116                       61s         Normal    Created                      pod/csi-snapshotter-0                                            Created container csi-snapshotter\nephemeral-4116                       57s         Normal    Started                      pod/csi-snapshotter-0                                            Started container csi-snapshotter\nephemeral-4116                       85s         Warning   FailedCreate                 statefulset/csi-snapshotter                                      create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter failed error: pods \"csi-snapshotter-0\" is forbidden: unable to validate against any pod security policy: []\nephemeral-4116                       85s         Normal    SuccessfulCreate             statefulset/csi-snapshotter                                      create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter successful\nephemeral-4116                       69s         Warning   FailedMount                  pod/inline-volume-tester-s57ft                                   MountVolume.SetUp failed for volume \"my-volume-1\" : kubernetes.io/csi: mounter.SetUpAt failed to get CSI client: driver name csi-hostpath-ephemeral-4116 not found in the list of registered CSI drivers\nephemeral-4116                       69s         Warning   FailedMount                  pod/inline-volume-tester-s57ft                                   MountVolume.SetUp failed for volume \"my-volume-0\" : kubernetes.io/csi: mounter.SetUpAt failed to get CSI client: driver name csi-hostpath-ephemeral-4116 not found in the list of registered CSI drivers\nephemeral-4116                       50s         Normal    Pulling                      
pod/inline-volume-tester-s57ft                                   Pulling image \"docker.io/library/busybox:1.29\"\nephemeral-4116                       47s         Normal    Pulled                       pod/inline-volume-tester-s57ft                                   Successfully pulled image \"docker.io/library/busybox:1.29\"\nephemeral-4116                       47s         Normal    Created                      pod/inline-volume-tester-s57ft                                   Created container csi-volume-tester\nephemeral-4116                       46s         Normal    Started                      pod/inline-volume-tester-s57ft                                   Started container csi-volume-tester\nephemeral-4116                       41s         Normal    Killing                      pod/inline-volume-tester-s57ft                                   Stopping container csi-volume-tester\ninit-container-7415                  30s         Normal    Scheduled                    pod/pod-init-a6d6f31a-10cf-490f-9efd-27a91145d0ed                Successfully assigned init-container-7415/pod-init-a6d6f31a-10cf-490f-9efd-27a91145d0ed to bootstrap-e2e-minion-group-qn53\ninit-container-7415                  9s          Normal    Pulled                       pod/pod-init-a6d6f31a-10cf-490f-9efd-27a91145d0ed                Container image \"docker.io/library/busybox:1.29\" already present on machine\ninit-container-7415                  8s          Normal    Created                      pod/pod-init-a6d6f31a-10cf-490f-9efd-27a91145d0ed                Created container init1\ninit-container-7415                  7s          Normal    Started                      pod/pod-init-a6d6f31a-10cf-490f-9efd-27a91145d0ed                Started container init1\ninit-container-7415                  6s          Warning   BackOff                      pod/pod-init-a6d6f31a-10cf-490f-9efd-27a91145d0ed                Back-off restarting failed container\nkube-system                          42s   
      Normal    Scheduled                    pod/coredns-65567c7b57-6q8sq                                     Successfully assigned kube-system/coredns-65567c7b57-6q8sq to bootstrap-e2e-minion-group-q10p\nkube-system                          39s         Normal    Pulling                      pod/coredns-65567c7b57-6q8sq                                     Pulling image \"k8s.gcr.io/coredns:1.6.5\"\nkube-system                          31s         Normal    Pulled                       pod/coredns-65567c7b57-6q8sq                                     Successfully pulled image \"k8s.gcr.io/coredns:1.6.5\"\nkube-system                          31s         Warning   Failed                       pod/coredns-65567c7b57-6q8sq                                     Error: cannot find volume \"config-volume\" to mount into container \"coredns\"\nkube-system                          4m50s       Warning   FailedScheduling             pod/coredns-65567c7b57-kdfdw                                     no nodes available to schedule pods\nkube-system                          4m42s       Warning   FailedScheduling             pod/coredns-65567c7b57-kdfdw                                     0/1 nodes are available: 1 node(s) were unschedulable.\nkube-system                          4m39s       Warning   FailedScheduling             pod/coredns-65567c7b57-kdfdw                                     0/4 nodes are available: 1 node(s) were unschedulable, 3 node(s) had taints that the pod didn't tolerate.\nkube-system                          4m26s       Warning   FailedScheduling             pod/coredns-65567c7b57-kdfdw                                     0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) had taints that the pod didn't tolerate.\nkube-system                          4m15s       Normal    Scheduled                    pod/coredns-65567c7b57-kdfdw                                     Successfully assigned kube-system/coredns-65567c7b57-kdfdw to 
bootstrap-e2e-minion-group-qn53\nkube-system                          4m14s       Normal    Pulling                      pod/coredns-65567c7b57-kdfdw                                     Pulling image \"k8s.gcr.io/coredns:1.6.5\"\nkube-system                          4m12s       Normal    Pulled                       pod/coredns-65567c7b57-kdfdw                                     Successfully pulled image \"k8s.gcr.io/coredns:1.6.5\"\nkube-system                          4m12s       Normal    Created                      pod/coredns-65567c7b57-kdfdw                                     Created container coredns\nkube-system                          4m12s       Normal    Started                      pod/coredns-65567c7b57-kdfdw                                     Started container coredns\nkube-system                          4m16s       Normal    Scheduled                    pod/coredns-65567c7b57-n7vgj                                     Successfully assigned kube-system/coredns-65567c7b57-n7vgj to bootstrap-e2e-minion-group-qkcq\nkube-system                          4m15s       Normal    Pulling                      pod/coredns-65567c7b57-n7vgj                                     Pulling image \"k8s.gcr.io/coredns:1.6.5\"\nkube-system                          4m13s       Normal    Pulled                       pod/coredns-65567c7b57-n7vgj                                     Successfully pulled image \"k8s.gcr.io/coredns:1.6.5\"\nkube-system                          4m13s       Normal    Created                      pod/coredns-65567c7b57-n7vgj                                     Created container coredns\nkube-system                          4m13s       Normal    Started                      pod/coredns-65567c7b57-n7vgj                                     Started container coredns\nkube-system                          67s         Normal    Scheduled                    pod/coredns-65567c7b57-t4vzb                                     Successfully assigned 
kube-system/coredns-65567c7b57-t4vzb to bootstrap-e2e-minion-group-vrtv\nkube-system                          65s         Normal    Pulling                      pod/coredns-65567c7b57-t4vzb                                     Pulling image \"k8s.gcr.io/coredns:1.6.5\"\nkube-system                          62s         Normal    Pulled                       pod/coredns-65567c7b57-t4vzb                                     Successfully pulled image \"k8s.gcr.io/coredns:1.6.5\"\nkube-system                          62s         Normal    Created                      pod/coredns-65567c7b57-t4vzb                                     Created container coredns\nkube-system                          61s         Normal    Started                      pod/coredns-65567c7b57-t4vzb                                     Started container coredns\nkube-system                          57s         Normal    Killing                      pod/coredns-65567c7b57-t4vzb                                     Stopping container coredns\nkube-system                          42s         Normal    Scheduled                    pod/coredns-65567c7b57-xvmds                                     Successfully assigned kube-system/coredns-65567c7b57-xvmds to bootstrap-e2e-minion-group-vrtv\nkube-system                          40s         Normal    Pulled                       pod/coredns-65567c7b57-xvmds                                     Container image \"k8s.gcr.io/coredns:1.6.5\" already present on machine\nkube-system                          39s         Normal    Created                      pod/coredns-65567c7b57-xvmds                                     Created container coredns\nkube-system                          38s         Normal    Started                      pod/coredns-65567c7b57-xvmds                                     Started container coredns\nkube-system                          34s         Normal    Killing                      pod/coredns-65567c7b57-xvmds                              
       Stopping container coredns\nkube-system                          68s         Normal    Scheduled                    pod/coredns-65567c7b57-zhl2f                                     Successfully assigned kube-system/coredns-65567c7b57-zhl2f to bootstrap-e2e-minion-group-vrtv\nkube-system                          65s         Normal    Pulling                      pod/coredns-65567c7b57-zhl2f                                     Pulling image \"k8s.gcr.io/coredns:1.6.5\"\nkube-system                          62s         Normal    Pulled                       pod/coredns-65567c7b57-zhl2f                                     Successfully pulled image \"k8s.gcr.io/coredns:1.6.5\"\nkube-system                          62s         Normal    Created                      pod/coredns-65567c7b57-zhl2f                                     Created container coredns\nkube-system                          61s         Normal    Started                      pod/coredns-65567c7b57-zhl2f                                     Started container coredns\nkube-system                          48s         Normal    Killing                      pod/coredns-65567c7b57-zhl2f                                     Stopping container coredns\nkube-system                          45s         Warning   Unhealthy                    pod/coredns-65567c7b57-zhl2f                                     Readiness probe failed: Get http://10.64.4.16:8181/ready: dial tcp 10.64.4.16:8181: connect: connection refused\nkube-system                          4m55s       Warning   FailedCreate                 replicaset/coredns-65567c7b57                                    Error creating: pods \"coredns-65567c7b57-\" is forbidden: no providers available to validate pod request\nkube-system                          4m52s       Warning   FailedCreate                 replicaset/coredns-65567c7b57                                    Error creating: pods \"coredns-65567c7b57-\" is forbidden: unable to validate against any 
pod security policy: []\nkube-system                          4m50s       Normal    SuccessfulCreate             replicaset/coredns-65567c7b57                                    Created pod: coredns-65567c7b57-kdfdw\nkube-system                          4m16s       Normal    SuccessfulCreate             replicaset/coredns-65567c7b57                                    Created pod: coredns-65567c7b57-n7vgj\nkube-system                          68s         Normal    SuccessfulCreate             replicaset/coredns-65567c7b57                                    Created pod: coredns-65567c7b57-zhl2f\nkube-system                          68s         Normal    SuccessfulCreate             replicaset/coredns-65567c7b57                                    Created pod: coredns-65567c7b57-t4vzb\nkube-system                          58s         Normal    SuccessfulDelete             replicaset/coredns-65567c7b57                                    Deleted pod: coredns-65567c7b57-t4vzb\nkube-system                          48s         Normal    SuccessfulDelete             replicaset/coredns-65567c7b57                                    Deleted pod: coredns-65567c7b57-zhl2f\nkube-system                          43s         Normal    SuccessfulCreate             replicaset/coredns-65567c7b57                                    Created pod: coredns-65567c7b57-xvmds\nkube-system                          42s         Normal    SuccessfulCreate             replicaset/coredns-65567c7b57                                    Created pod: coredns-65567c7b57-6q8sq\nkube-system                          34s         Normal    SuccessfulDelete             replicaset/coredns-65567c7b57                                    Deleted pod: coredns-65567c7b57-6q8sq\nkube-system                          34s         Normal    SuccessfulDelete             replicaset/coredns-65567c7b57                                    Deleted pod: coredns-65567c7b57-xvmds\nkube-system                          4m55s       
Normal    ScalingReplicaSet            deployment/coredns                                               Scaled up replica set coredns-65567c7b57 to 1\nkube-system                          4m16s       Normal    ScalingReplicaSet            deployment/coredns                                               Scaled up replica set coredns-65567c7b57 to 2\nkube-system                          43s         Normal    ScalingReplicaSet            deployment/coredns                                               Scaled up replica set coredns-65567c7b57 to 4\nkube-system                          58s         Normal    ScalingReplicaSet            deployment/coredns                                               Scaled down replica set coredns-65567c7b57 to 3\nkube-system                          34s         Normal    ScalingReplicaSet            deployment/coredns                                               Scaled down replica set coredns-65567c7b57 to 2\nkube-system                          4m51s       Warning   FailedScheduling             pod/event-exporter-v0.3.1-747b47fcd-js4fh                        no nodes available to schedule pods\nkube-system                          4m41s       Warning   FailedScheduling             pod/event-exporter-v0.3.1-747b47fcd-js4fh                        0/1 nodes are available: 1 node(s) were unschedulable.\nkube-system                          4m27s       Warning   FailedScheduling             pod/event-exporter-v0.3.1-747b47fcd-js4fh                        0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) had taints that the pod didn't tolerate.\nkube-system                          4m16s       Normal    Scheduled                    pod/event-exporter-v0.3.1-747b47fcd-js4fh                        Successfully assigned kube-system/event-exporter-v0.3.1-747b47fcd-js4fh to bootstrap-e2e-minion-group-q10p\nkube-system                          4m14s       Normal    Pulling                      
pod/event-exporter-v0.3.1-747b47fcd-js4fh                        Pulling image \"k8s.gcr.io/event-exporter:v0.3.1\"\nkube-system                          4m11s       Normal    Pulled                       pod/event-exporter-v0.3.1-747b47fcd-js4fh                        Successfully pulled image \"k8s.gcr.io/event-exporter:v0.3.1\"\nkube-system                          4m11s       Normal    Created                      pod/event-exporter-v0.3.1-747b47fcd-js4fh                        Created container event-exporter\nkube-system                          4m10s       Normal    Started                      pod/event-exporter-v0.3.1-747b47fcd-js4fh                        Started container event-exporter\nkube-system                          4m10s       Normal    Pulling                      pod/event-exporter-v0.3.1-747b47fcd-js4fh                        Pulling image \"k8s.gcr.io/prometheus-to-sd:v0.7.2\"\nkube-system                          4m8s        Normal    Pulled                       pod/event-exporter-v0.3.1-747b47fcd-js4fh                        Successfully pulled image \"k8s.gcr.io/prometheus-to-sd:v0.7.2\"\nkube-system                          4m8s        Normal    Created                      pod/event-exporter-v0.3.1-747b47fcd-js4fh                        Created container prometheus-to-sd-exporter\nkube-system                          4m8s        Normal    Started                      pod/event-exporter-v0.3.1-747b47fcd-js4fh                        Started container prometheus-to-sd-exporter\nkube-system                          4m55s       Normal    SuccessfulCreate             replicaset/event-exporter-v0.3.1-747b47fcd                       Created pod: event-exporter-v0.3.1-747b47fcd-js4fh\nkube-system                          4m55s       Normal    ScalingReplicaSet            deployment/event-exporter-v0.3.1                                 Scaled up replica set event-exporter-v0.3.1-747b47fcd to 1\nkube-system                          4m49s       
Warning   FailedScheduling             pod/fluentd-gcp-scaler-76d9c77b4d-sk5bz                          no nodes available to schedule pods\nkube-system                          4m41s       Warning   FailedScheduling             pod/fluentd-gcp-scaler-76d9c77b4d-sk5bz                          0/1 nodes are available: 1 node(s) were unschedulable.\nkube-system                          4m30s       Warning   FailedScheduling             pod/fluentd-gcp-scaler-76d9c77b4d-sk5bz                          0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) had taints that the pod didn't tolerate.\nkube-system                          4m22s       Normal    Scheduled                    pod/fluentd-gcp-scaler-76d9c77b4d-sk5bz                          Successfully assigned kube-system/fluentd-gcp-scaler-76d9c77b4d-sk5bz to bootstrap-e2e-minion-group-vrtv\nkube-system                          4m21s       Normal    Pulling                      pod/fluentd-gcp-scaler-76d9c77b4d-sk5bz                          Pulling image \"k8s.gcr.io/fluentd-gcp-scaler:0.5.2\"\nkube-system                          4m17s       Normal    Pulled                       pod/fluentd-gcp-scaler-76d9c77b4d-sk5bz                          Successfully pulled image \"k8s.gcr.io/fluentd-gcp-scaler:0.5.2\"\nkube-system                          4m16s       Normal    Created                      pod/fluentd-gcp-scaler-76d9c77b4d-sk5bz                          Created container fluentd-gcp-scaler\nkube-system                          4m16s       Normal    Started                      pod/fluentd-gcp-scaler-76d9c77b4d-sk5bz                          Started container fluentd-gcp-scaler\nkube-system                          4m49s       Normal    SuccessfulCreate             replicaset/fluentd-gcp-scaler-76d9c77b4d                         Created pod: fluentd-gcp-scaler-76d9c77b4d-sk5bz\nkube-system                          4m49s       Normal    ScalingReplicaSet            deployment/fluentd-gcp-scaler 
                                   Scaled up replica set fluentd-gcp-scaler-76d9c77b4d to 1\nkube-system                          3m31s       Normal    Scheduled                    pod/fluentd-gcp-v3.2.0-6tspz                                     Successfully assigned kube-system/fluentd-gcp-v3.2.0-6tspz to bootstrap-e2e-master\nkube-system                          3m30s       Normal    Pulled                       pod/fluentd-gcp-v3.2.0-6tspz                                     Container image \"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\" already present on machine\nkube-system                          3m30s       Normal    Created                      pod/fluentd-gcp-v3.2.0-6tspz                                     Created container fluentd-gcp\nkube-system                          3m29s       Normal    Started                      pod/fluentd-gcp-v3.2.0-6tspz                                     Started container fluentd-gcp\nkube-system                          3m29s       Normal    Pulled                       pod/fluentd-gcp-v3.2.0-6tspz                                     Container image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\" already present on machine\nkube-system                          3m29s       Normal    Created                      pod/fluentd-gcp-v3.2.0-6tspz                                     Created container prometheus-to-sd-exporter\nkube-system                          3m26s       Normal    Started                      pod/fluentd-gcp-v3.2.0-6tspz                                     Started container prometheus-to-sd-exporter\nkube-system                          4m38s       Normal    Scheduled                    pod/fluentd-gcp-v3.2.0-g8wd5                                     Successfully assigned kube-system/fluentd-gcp-v3.2.0-g8wd5 to bootstrap-e2e-minion-group-qkcq\nkube-system                          4m37s       Warning   FailedMount                  pod/fluentd-gcp-v3.2.0-g8wd5                                     
MountVolume.SetUp failed for volume \"config-volume\" : failed to sync configmap cache: timed out waiting for the condition\nkube-system                          4m37s       Warning   FailedMount                  pod/fluentd-gcp-v3.2.0-g8wd5                                     MountVolume.SetUp failed for volume \"fluentd-gcp-token-5vkfw\" : failed to sync secret cache: timed out waiting for the condition\nkube-system                          4m36s       Normal    Pulling                      pod/fluentd-gcp-v3.2.0-g8wd5                                     Pulling image \"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\"\nkube-system                          4m27s       Normal    Pulled                       pod/fluentd-gcp-v3.2.0-g8wd5                                     Successfully pulled image \"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\"\nkube-system                          4m26s       Normal    Created                      pod/fluentd-gcp-v3.2.0-g8wd5                                     Created container fluentd-gcp\nkube-system                          4m26s       Normal    Started                      pod/fluentd-gcp-v3.2.0-g8wd5                                     Started container fluentd-gcp\nkube-system                          4m26s       Normal    Pulled                       pod/fluentd-gcp-v3.2.0-g8wd5                                     Container image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\" already present on machine\nkube-system                          4m26s       Normal    Created                      pod/fluentd-gcp-v3.2.0-g8wd5                                     Created container prometheus-to-sd-exporter\nkube-system                          4m25s       Normal    Started                      pod/fluentd-gcp-v3.2.0-g8wd5                                     Started container prometheus-to-sd-exporter\nkube-system                          3m48s       Normal    Killing                      
pod/fluentd-gcp-v3.2.0-g8wd5                                     Stopping container fluentd-gcp\nkube-system                          3m48s       Normal    Killing                      pod/fluentd-gcp-v3.2.0-g8wd5                                     Stopping container prometheus-to-sd-exporter\nkube-system                          4m39s       Normal    Scheduled                    pod/fluentd-gcp-v3.2.0-hvwts                                     Successfully assigned kube-system/fluentd-gcp-v3.2.0-hvwts to bootstrap-e2e-minion-group-q10p\nkube-system                          4m38s       Warning   FailedMount                  pod/fluentd-gcp-v3.2.0-hvwts                                     MountVolume.SetUp failed for volume \"config-volume\" : failed to sync configmap cache: timed out waiting for the condition\nkube-system                          4m38s       Warning   FailedMount                  pod/fluentd-gcp-v3.2.0-hvwts                                     MountVolume.SetUp failed for volume \"fluentd-gcp-token-5vkfw\" : failed to sync secret cache: timed out waiting for the condition\nkube-system                          4m37s       Normal    Pulling                      pod/fluentd-gcp-v3.2.0-hvwts                                     Pulling image \"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\"\nkube-system                          4m26s       Normal    Pulled                       pod/fluentd-gcp-v3.2.0-hvwts                                     Successfully pulled image \"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\"\nkube-system                          4m25s       Normal    Created                      pod/fluentd-gcp-v3.2.0-hvwts                                     Created container fluentd-gcp\nkube-system                          4m25s       Normal    Started                      pod/fluentd-gcp-v3.2.0-hvwts                                     Started container fluentd-gcp\nkube-system                          4m25s       
Normal    Pulled                       pod/fluentd-gcp-v3.2.0-hvwts                                     Container image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\" already present on machine\nkube-system                          4m25s       Normal    Created                      pod/fluentd-gcp-v3.2.0-hvwts                                     Created container prometheus-to-sd-exporter\nkube-system                          4m24s       Normal    Started                      pod/fluentd-gcp-v3.2.0-hvwts                                     Started container prometheus-to-sd-exporter\nkube-system                          4m14s       Normal    Killing                      pod/fluentd-gcp-v3.2.0-hvwts                                     Stopping container fluentd-gcp\nkube-system                          4m14s       Normal    Killing                      pod/fluentd-gcp-v3.2.0-hvwts                                     Stopping container prometheus-to-sd-exporter\nkube-system                          4m43s       Normal    Scheduled                    pod/fluentd-gcp-v3.2.0-m4h9z                                     Successfully assigned kube-system/fluentd-gcp-v3.2.0-m4h9z to bootstrap-e2e-master\nkube-system                          4m34s       Normal    Pulling                      pod/fluentd-gcp-v3.2.0-m4h9z                                     Pulling image \"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\"\nkube-system                          4m20s       Normal    Pulled                       pod/fluentd-gcp-v3.2.0-m4h9z                                     Successfully pulled image \"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\"\nkube-system                          4m16s       Normal    Created                      pod/fluentd-gcp-v3.2.0-m4h9z                                     Created container fluentd-gcp\nkube-system                          4m16s       Normal    Started                      pod/fluentd-gcp-v3.2.0-m4h9z                      
Started container fluentd-gcp
kube-system   4m16s   Normal    Pulled              pod/fluentd-gcp-v3.2.0-m4h9z   Container image "k8s.gcr.io/prometheus-to-sd:v0.5.0" already present on machine
kube-system   4m16s   Normal    Created             pod/fluentd-gcp-v3.2.0-m4h9z   Created container prometheus-to-sd-exporter
kube-system   4m15s   Normal    Started             pod/fluentd-gcp-v3.2.0-m4h9z   Started container prometheus-to-sd-exporter
kube-system   3m38s   Normal    Killing             pod/fluentd-gcp-v3.2.0-m4h9z   Stopping container fluentd-gcp
kube-system   3m38s   Normal    Killing             pod/fluentd-gcp-v3.2.0-m4h9z   Stopping container prometheus-to-sd-exporter
kube-system   4m38s   Normal    Scheduled           pod/fluentd-gcp-v3.2.0-mw4rn   Successfully assigned kube-system/fluentd-gcp-v3.2.0-mw4rn to bootstrap-e2e-minion-group-vrtv
kube-system   4m37s   Normal    Pulling             pod/fluentd-gcp-v3.2.0-mw4rn   Pulling image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17"
kube-system   4m27s   Normal    Pulled              pod/fluentd-gcp-v3.2.0-mw4rn   Successfully pulled image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17"
kube-system   4m27s   Normal    Created             pod/fluentd-gcp-v3.2.0-mw4rn   Created container fluentd-gcp
kube-system   4m27s   Normal    Started             pod/fluentd-gcp-v3.2.0-mw4rn   Started container fluentd-gcp
kube-system   4m27s   Normal    Pulled              pod/fluentd-gcp-v3.2.0-mw4rn   Container image "k8s.gcr.io/prometheus-to-sd:v0.5.0" already present on machine
kube-system   4m27s   Normal    Created             pod/fluentd-gcp-v3.2.0-mw4rn   Created container prometheus-to-sd-exporter
kube-system   4m27s   Normal    Started             pod/fluentd-gcp-v3.2.0-mw4rn   Started container prometheus-to-sd-exporter
kube-system   4m2s    Normal    Killing             pod/fluentd-gcp-v3.2.0-mw4rn   Stopping container fluentd-gcp
kube-system   4m2s    Normal    Killing             pod/fluentd-gcp-v3.2.0-mw4rn   Stopping container prometheus-to-sd-exporter
kube-system   3m50s   Normal    Scheduled           pod/fluentd-gcp-v3.2.0-mxnmk   Successfully assigned kube-system/fluentd-gcp-v3.2.0-mxnmk to bootstrap-e2e-minion-group-vrtv
kube-system   3m49s   Normal    Pulled              pod/fluentd-gcp-v3.2.0-mxnmk   Container image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17" already present on machine
kube-system   3m49s   Normal    Created             pod/fluentd-gcp-v3.2.0-mxnmk   Created container fluentd-gcp
kube-system   3m49s   Normal    Started             pod/fluentd-gcp-v3.2.0-mxnmk   Started container fluentd-gcp
kube-system   3m49s   Normal    Pulled              pod/fluentd-gcp-v3.2.0-mxnmk   Container image "k8s.gcr.io/prometheus-to-sd:v0.5.0" already present on machine
kube-system   3m49s   Normal    Created             pod/fluentd-gcp-v3.2.0-mxnmk   Created container prometheus-to-sd-exporter
kube-system   3m48s   Normal    Started             pod/fluentd-gcp-v3.2.0-mxnmk   Started container prometheus-to-sd-exporter
kube-system   4m39s   Normal    Scheduled           pod/fluentd-gcp-v3.2.0-pfpj2   Successfully assigned kube-system/fluentd-gcp-v3.2.0-pfpj2 to bootstrap-e2e-minion-group-qn53
kube-system   4m38s   Warning   FailedMount         pod/fluentd-gcp-v3.2.0-pfpj2   MountVolume.SetUp failed for volume "fluentd-gcp-token-5vkfw" : failed to sync secret cache: timed out waiting for the condition
kube-system   4m38s   Warning   FailedMount         pod/fluentd-gcp-v3.2.0-pfpj2   MountVolume.SetUp failed for volume "config-volume" : failed to sync configmap cache: timed out waiting for the condition
kube-system   4m37s   Normal    Pulling             pod/fluentd-gcp-v3.2.0-pfpj2   Pulling image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17"
kube-system   4m28s   Normal    Pulled              pod/fluentd-gcp-v3.2.0-pfpj2   Successfully pulled image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17"
kube-system   4m26s   Normal    Created             pod/fluentd-gcp-v3.2.0-pfpj2   Created container fluentd-gcp
kube-system   4m26s   Normal    Started             pod/fluentd-gcp-v3.2.0-pfpj2   Started container fluentd-gcp
kube-system   4m26s   Normal    Pulled              pod/fluentd-gcp-v3.2.0-pfpj2   Container image "k8s.gcr.io/prometheus-to-sd:v0.5.0" already present on machine
kube-system   4m25s   Normal    Created             pod/fluentd-gcp-v3.2.0-pfpj2   Created container prometheus-to-sd-exporter
kube-system   4m25s   Normal    Started             pod/fluentd-gcp-v3.2.0-pfpj2   Started container prometheus-to-sd-exporter
kube-system   3m25s   Normal    Killing             pod/fluentd-gcp-v3.2.0-pfpj2   Stopping container fluentd-gcp
kube-system   3m25s   Normal    Killing             pod/fluentd-gcp-v3.2.0-pfpj2   Stopping container prometheus-to-sd-exporter
kube-system   3m10s   Normal    Scheduled           pod/fluentd-gcp-v3.2.0-t6mk4   Successfully assigned kube-system/fluentd-gcp-v3.2.0-t6mk4 to bootstrap-e2e-minion-group-qn53
kube-system   3m10s   Normal    Pulled              pod/fluentd-gcp-v3.2.0-t6mk4   Container image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17" already present on machine
kube-system   3m10s   Normal    Created             pod/fluentd-gcp-v3.2.0-t6mk4   Created container fluentd-gcp
kube-system   3m10s   Normal    Started             pod/fluentd-gcp-v3.2.0-t6mk4   Started container fluentd-gcp
kube-system   3m10s   Normal    Pulled              pod/fluentd-gcp-v3.2.0-t6mk4   Container image "k8s.gcr.io/prometheus-to-sd:v0.5.0" already present on machine
kube-system   3m9s    Normal    Created             pod/fluentd-gcp-v3.2.0-t6mk4   Created container prometheus-to-sd-exporter
kube-system   3m8s    Normal    Started             pod/fluentd-gcp-v3.2.0-t6mk4   Started container prometheus-to-sd-exporter
kube-system   4m4s    Normal    Scheduled           pod/fluentd-gcp-v3.2.0-vqmcb   Successfully assigned kube-system/fluentd-gcp-v3.2.0-vqmcb to bootstrap-e2e-minion-group-q10p
kube-system   4m3s    Normal    Pulled              pod/fluentd-gcp-v3.2.0-vqmcb   Container image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17" already present on machine
kube-system   4m3s    Normal    Created             pod/fluentd-gcp-v3.2.0-vqmcb   Created container fluentd-gcp
kube-system   4m3s    Normal    Started             pod/fluentd-gcp-v3.2.0-vqmcb   Started container fluentd-gcp
kube-system   4m3s    Normal    Pulled              pod/fluentd-gcp-v3.2.0-vqmcb   Container image "k8s.gcr.io/prometheus-to-sd:v0.5.0" already present on machine
kube-system   4m3s    Normal    Created             pod/fluentd-gcp-v3.2.0-vqmcb   Created container prometheus-to-sd-exporter
kube-system   4m2s    Normal    Started             pod/fluentd-gcp-v3.2.0-vqmcb   Started container prometheus-to-sd-exporter
kube-system   3m40s   Normal    Scheduled           pod/fluentd-gcp-v3.2.0-zcg6h   Successfully assigned kube-system/fluentd-gcp-v3.2.0-zcg6h to bootstrap-e2e-minion-group-qkcq
kube-system   3m39s   Normal    Pulled              pod/fluentd-gcp-v3.2.0-zcg6h   Container image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17" already present on machine
kube-system   3m39s   Normal    Created             pod/fluentd-gcp-v3.2.0-zcg6h   Created container fluentd-gcp
kube-system   3m39s   Normal    Started             pod/fluentd-gcp-v3.2.0-zcg6h   Started container fluentd-gcp
kube-system   3m39s   Normal    Pulled              pod/fluentd-gcp-v3.2.0-zcg6h   Container image "k8s.gcr.io/prometheus-to-sd:v0.5.0" already present on machine
kube-system   3m39s   Normal    Created             pod/fluentd-gcp-v3.2.0-zcg6h   Created container prometheus-to-sd-exporter
kube-system   3m38s   Normal    Started             pod/fluentd-gcp-v3.2.0-zcg6h   Started container prometheus-to-sd-exporter
kube-system   4m44s   Normal    SuccessfulCreate    daemonset/fluentd-gcp-v3.2.0   Created pod: fluentd-gcp-v3.2.0-m4h9z
kube-system   4m40s   Normal    SuccessfulCreate    daemonset/fluentd-gcp-v3.2.0   Created pod: fluentd-gcp-v3.2.0-pfpj2
kube-system   4m40s   Normal    SuccessfulCreate    daemonset/fluentd-gcp-v3.2.0   Created pod: fluentd-gcp-v3.2.0-hvwts
kube-system   4m39s   Normal    SuccessfulCreate    daemonset/fluentd-gcp-v3.2.0   Created pod: fluentd-gcp-v3.2.0-g8wd5
kube-system   4m38s   Normal    SuccessfulCreate    daemonset/fluentd-gcp-v3.2.0   Created pod: fluentd-gcp-v3.2.0-mw4rn
kube-system   4m14s   Normal    SuccessfulDelete    daemonset/fluentd-gcp-v3.2.0   Deleted pod: fluentd-gcp-v3.2.0-hvwts
kube-system   4m4s    Normal    SuccessfulCreate    daemonset/fluentd-gcp-v3.2.0   Created pod: fluentd-gcp-v3.2.0-vqmcb
kube-system   4m2s    Normal    SuccessfulDelete    daemonset/fluentd-gcp-v3.2.0   Deleted pod: fluentd-gcp-v3.2.0-mw4rn
kube-system   3m50s   Normal    SuccessfulCreate    daemonset/fluentd-gcp-v3.2.0   Created pod: fluentd-gcp-v3.2.0-mxnmk
kube-system   3m48s   Normal    SuccessfulDelete    daemonset/fluentd-gcp-v3.2.0   Deleted pod: fluentd-gcp-v3.2.0-g8wd5
kube-system   3m40s   Normal    SuccessfulCreate    daemonset/fluentd-gcp-v3.2.0   Created pod: fluentd-gcp-v3.2.0-zcg6h
kube-system   3m38s   Normal    SuccessfulDelete    daemonset/fluentd-gcp-v3.2.0   Deleted pod: fluentd-gcp-v3.2.0-m4h9z
kube-system   3m31s   Normal    SuccessfulCreate    daemonset/fluentd-gcp-v3.2.0   Created pod: fluentd-gcp-v3.2.0-6tspz
kube-system   3m25s   Normal    SuccessfulDelete    daemonset/fluentd-gcp-v3.2.0   Deleted pod: fluentd-gcp-v3.2.0-pfpj2
kube-system   3m10s   Normal    SuccessfulCreate    daemonset/fluentd-gcp-v3.2.0   (combined from similar events): Created pod: fluentd-gcp-v3.2.0-t6mk4
kube-system   4m33s   Normal    LeaderElection      configmap/ingress-gce-lock   bootstrap-e2e-master_81ba0 became leader
kube-system   5m14s   Normal    LeaderElection      endpoints/kube-controller-manager   bootstrap-e2e-master_197334f0-6e8d-4b10-b666-0e8fc3e0a58b became leader
kube-system   5m14s   Normal    LeaderElection      lease/kube-controller-manager   bootstrap-e2e-master_197334f0-6e8d-4b10-b666-0e8fc3e0a58b became leader
kube-system   46s     Normal    Scheduled           pod/kube-dns-autoscaler-65bc6d4889-c4f5l   Successfully assigned kube-system/kube-dns-autoscaler-65bc6d4889-c4f5l to bootstrap-e2e-minion-group-qkcq
kube-system   45s     Normal    Pulled              pod/kube-dns-autoscaler-65bc6d4889-c4f5l   Container image "k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.7.1" already present on machine
kube-system   45s     Normal    Created             pod/kube-dns-autoscaler-65bc6d4889-c4f5l   Created container autoscaler
kube-system   44s     Normal    Started             pod/kube-dns-autoscaler-65bc6d4889-c4f5l   Started container autoscaler
kube-system   4m44s   Warning   FailedScheduling    pod/kube-dns-autoscaler-65bc6d4889-sqctq   no nodes available to schedule pods
kube-system   4m42s   Warning   FailedScheduling    pod/kube-dns-autoscaler-65bc6d4889-sqctq   0/1 nodes are available: 1 node(s) were unschedulable.
kube-system   4m30s   Warning   FailedScheduling    pod/kube-dns-autoscaler-65bc6d4889-sqctq   0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) had taints that the pod didn't tolerate.
kube-system   4m22s   Normal    Scheduled           pod/kube-dns-autoscaler-65bc6d4889-sqctq   Successfully assigned kube-system/kube-dns-autoscaler-65bc6d4889-sqctq to bootstrap-e2e-minion-group-qkcq
kube-system   4m21s   Normal    Pulling             pod/kube-dns-autoscaler-65bc6d4889-sqctq   Pulling image "k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.7.1"
kube-system   4m19s   Normal    Pulled              pod/kube-dns-autoscaler-65bc6d4889-sqctq   Successfully pulled image "k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.7.1"
kube-system   4m19s   Normal    Created             pod/kube-dns-autoscaler-65bc6d4889-sqctq   Created container autoscaler
kube-system   4m19s   Normal    Started             pod/kube-dns-autoscaler-65bc6d4889-sqctq   Started container autoscaler
kube-system   46s     Normal    Killing             pod/kube-dns-autoscaler-65bc6d4889-sqctq   Stopping container autoscaler
kube-system   4m49s   Warning   FailedCreate        replicaset/kube-dns-autoscaler-65bc6d4889   Error creating: pods "kube-dns-autoscaler-65bc6d4889-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found
kube-system   4m44s   Normal    SuccessfulCreate    replicaset/kube-dns-autoscaler-65bc6d4889   Created pod: kube-dns-autoscaler-65bc6d4889-sqctq
kube-system   46s     Normal    SuccessfulCreate    replicaset/kube-dns-autoscaler-65bc6d4889   Created pod: kube-dns-autoscaler-65bc6d4889-c4f5l
kube-system   4m55s   Normal    ScalingReplicaSet   deployment/kube-dns-autoscaler   Scaled up replica set kube-dns-autoscaler-65bc6d4889 to 1
kube-system   4m39s   Normal    Pulled              pod/kube-proxy-bootstrap-e2e-minion-group-q10p   Container image "k8s.gcr.io/kube-proxy-amd64:v1.18.0-alpha.1.755_05209312b74eac" already present on machine
kube-system   4m39s   Normal    Created             pod/kube-proxy-bootstrap-e2e-minion-group-q10p   Created container kube-proxy
kube-system   4m38s   Normal    Started             pod/kube-proxy-bootstrap-e2e-minion-group-q10p   Started container kube-proxy
kube-system   4m38s   Normal    Pulled              pod/kube-proxy-bootstrap-e2e-minion-group-qkcq   Container image "k8s.gcr.io/kube-proxy-amd64:v1.18.0-alpha.1.755_05209312b74eac" already present on machine
kube-system   4m38s   Normal    Created             pod/kube-proxy-bootstrap-e2e-minion-group-qkcq   Created container kube-proxy
kube-system   4m38s   Normal    Started             pod/kube-proxy-bootstrap-e2e-minion-group-qkcq   Started container kube-proxy
kube-system   4m39s   Normal    Pulled              pod/kube-proxy-bootstrap-e2e-minion-group-qn53   Container image "k8s.gcr.io/kube-proxy-amd64:v1.18.0-alpha.1.755_05209312b74eac" already present on machine
kube-system   4m39s   Normal    Created             pod/kube-proxy-bootstrap-e2e-minion-group-qn53   Created container kube-proxy
kube-system   4m39s   Normal    Started             pod/kube-proxy-bootstrap-e2e-minion-group-qn53   Started container kube-proxy
kube-system   4m37s   Normal    Pulled              pod/kube-proxy-bootstrap-e2e-minion-group-vrtv   Container image "k8s.gcr.io/kube-proxy-amd64:v1.18.0-alpha.1.755_05209312b74eac" already present on machine
kube-system   4m37s   Normal    Created             pod/kube-proxy-bootstrap-e2e-minion-group-vrtv   Created container kube-proxy
kube-system   4m37s   Normal    Started             pod/kube-proxy-bootstrap-e2e-minion-group-vrtv   Started container kube-proxy
kube-system   5m13s   Normal    LeaderElection      endpoints/kube-scheduler   bootstrap-e2e-master_5d7b243b-8849-4a10-baf7-fc0a85897178 became leader
kube-system   5m13s   Normal    LeaderElection      lease/kube-scheduler   bootstrap-e2e-master_5d7b243b-8849-4a10-baf7-fc0a85897178 became leader
kube-system   4m49s   Warning   FailedScheduling    pod/kubernetes-dashboard-7778f8b456-wjltm   no nodes available to schedule pods
kube-system   4m43s   Warning   FailedScheduling    pod/kubernetes-dashboard-7778f8b456-wjltm   0/1 nodes are available: 1 node(s) were unschedulable.
kube-system   4m40s   Warning   FailedScheduling    pod/kubernetes-dashboard-7778f8b456-wjltm   0/2 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 1 node(s) were unschedulable.
kube-system   4m25s   Warning   FailedScheduling    pod/kubernetes-dashboard-7778f8b456-wjltm   0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) had taints that the pod didn't tolerate.
kube-system   4m14s   Normal    Scheduled           pod/kubernetes-dashboard-7778f8b456-wjltm   Successfully assigned kube-system/kubernetes-dashboard-7778f8b456-wjltm to bootstrap-e2e-minion-group-qkcq
kube-system   4m13s   Normal    Pulling             pod/kubernetes-dashboard-7778f8b456-wjltm   Pulling image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1"
kube-system   4m10s   Normal    Pulled              pod/kubernetes-dashboard-7778f8b456-wjltm   Successfully pulled image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1"
kube-system   4m8s    Normal    Created             pod/kubernetes-dashboard-7778f8b456-wjltm   Created container kubernetes-dashboard
kube-system   4m8s    Normal    Started             pod/kubernetes-dashboard-7778f8b456-wjltm   Started container kubernetes-dashboard
kube-system   4m49s   Normal    SuccessfulCreate    replicaset/kubernetes-dashboard-7778f8b456   Created pod: kubernetes-dashboard-7778f8b456-wjltm
kube-system   4m49s   Normal    ScalingReplicaSet   deployment/kubernetes-dashboard   Scaled up replica set kubernetes-dashboard-7778f8b456 to 1
kube-system   4m49s   Warning   FailedScheduling    pod/l7-default-backend-678889f899-4q2t5   no nodes available to schedule pods
kube-system   4m41s   Warning   FailedScheduling    pod/l7-default-backend-678889f899-4q2t5   0/1 nodes are available: 1 node(s) were unschedulable.
kube-system   4m31s   Warning   FailedScheduling    pod/l7-default-backend-678889f899-4q2t5   0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) had taints that the pod didn't tolerate.
kube-system   4m23s   Normal    Scheduled           pod/l7-default-backend-678889f899-4q2t5   Successfully assigned kube-system/l7-default-backend-678889f899-4q2t5 to bootstrap-e2e-minion-group-q10p
kube-system   4m14s   Normal    Pulling             pod/l7-default-backend-678889f899-4q2t5   Pulling image "k8s.gcr.io/ingress-gce-404-server-with-metrics-amd64:v1.6.0"
kube-system   4m13s   Normal    Pulled              pod/l7-default-backend-678889f899-4q2t5   Successfully pulled image "k8s.gcr.io/ingress-gce-404-server-with-metrics-amd64:v1.6.0"
kube-system   4m13s   Normal    Created             pod/l7-default-backend-678889f899-4q2t5   Created container default-http-backend
kube-system   4m5s    Normal    Started             pod/l7-default-backend-678889f899-4q2t5   Started container default-http-backend
kube-system   4m55s   Warning   FailedCreate        replicaset/l7-default-backend-678889f899   Error creating: pods "l7-default-backend-678889f899-" is forbidden: no providers available to validate pod request
kube-system   4m52s   Warning   FailedCreate        replicaset/l7-default-backend-678889f899   Error creating: pods "l7-default-backend-678889f899-" is forbidden: unable to validate against any pod security policy: []
kube-system   4m50s   Normal    SuccessfulCreate    replicaset/l7-default-backend-678889f899   Created pod: l7-default-backend-678889f899-4q2t5
kube-system   4m55s   Normal    ScalingReplicaSet   deployment/l7-default-backend   Scaled up replica set l7-default-backend-678889f899 to 1
kube-system   4m47s   Normal    Created             pod/l7-lb-controller-bootstrap-e2e-master   Created container l7-lb-controller
kube-system   4m44s   Normal    Started             pod/l7-lb-controller-bootstrap-e2e-master   Started container l7-lb-controller
kube-system   4m48s   Normal    Pulled              pod/l7-lb-controller-bootstrap-e2e-master   Container image "k8s.gcr.io/ingress-gce-glbc-amd64:v1.6.1" already present on machine
kube-system   4m39s   Normal    Scheduled           pod/metadata-proxy-v0.1-666fv   Successfully assigned kube-system/metadata-proxy-v0.1-666fv to bootstrap-e2e-minion-group-qn53
kube-system   4m38s   Normal    Pulling             pod/metadata-proxy-v0.1-666fv   Pulling image "k8s.gcr.io/metadata-proxy:v0.1.12"
kube-system   4m37s   Normal    Pulled              pod/metadata-proxy-v0.1-666fv   Successfully pulled image "k8s.gcr.io/metadata-proxy:v0.1.12"
kube-system   4m37s   Normal    Created             pod/metadata-proxy-v0.1-666fv   Created container metadata-proxy
kube-system   4m36s   Normal    Started             pod/metadata-proxy-v0.1-666fv   Started container metadata-proxy
kube-system   4m36s   Normal    Pulling             pod/metadata-proxy-v0.1-666fv   Pulling image "k8s.gcr.io/prometheus-to-sd:v0.5.0"
kube-system   4m34s   Normal    Pulled              pod/metadata-proxy-v0.1-666fv   Successfully pulled image "k8s.gcr.io/prometheus-to-sd:v0.5.0"
kube-system   4m32s   Normal    Created             pod/metadata-proxy-v0.1-666fv   Created container prometheus-to-sd-exporter
kube-system   4m30s   Normal    Started             pod/metadata-proxy-v0.1-666fv   Started container prometheus-to-sd-exporter
kube-system   4m39s   Normal    Scheduled           pod/metadata-proxy-v0.1-9nsx7   Successfully assigned kube-system/metadata-proxy-v0.1-9nsx7 to bootstrap-e2e-minion-group-qkcq
kube-system   4m37s   Warning   FailedMount         pod/metadata-proxy-v0.1-9nsx7   MountVolume.SetUp failed for volume "metadata-proxy-token-mplx6" : failed to sync secret cache: timed out waiting for the condition
kube-system   4m35s   Normal    Pulling             pod/metadata-proxy-v0.1-9nsx7   Pulling image "k8s.gcr.io/metadata-proxy:v0.1.12"
kube-system   4m34s   Normal    Pulled              pod/metadata-proxy-v0.1-9nsx7   Successfully pulled image "k8s.gcr.io/metadata-proxy:v0.1.12"
kube-system   4m32s   Normal    Created             pod/metadata-proxy-v0.1-9nsx7   Created container metadata-proxy
kube-system   4m31s   Normal    Started             pod/metadata-proxy-v0.1-9nsx7   Started container metadata-proxy
kube-system   4m31s   Normal    Pulling             pod/metadata-proxy-v0.1-9nsx7   Pulling image "k8s.gcr.io/prometheus-to-sd:v0.5.0"
kube-system   4m30s   Normal    Pulled              pod/metadata-proxy-v0.1-9nsx7   Successfully pulled image "k8s.gcr.io/prometheus-to-sd:v0.5.0"
kube-system   4m29s   Normal    Created             pod/metadata-proxy-v0.1-9nsx7   Created container prometheus-to-sd-exporter
kube-system   4m27s   Normal    Started             pod/metadata-proxy-v0.1-9nsx7   Started container prometheus-to-sd-exporter
kube-system   4m43s   Normal    Scheduled           pod/metadata-proxy-v0.1-chbgg   Successfully assigned kube-system/metadata-proxy-v0.1-chbgg to bootstrap-e2e-master
kube-system   4m41s   Normal    Pulling             pod/metadata-proxy-v0.1-chbgg   Pulling image "k8s.gcr.io/metadata-proxy:v0.1.12"
kube-system   4m40s   Normal    Pulled              pod/metadata-proxy-v0.1-chbgg   Successfully pulled image "k8s.gcr.io/metadata-proxy:v0.1.12"
kube-system   4m40s   Normal    Created             pod/metadata-proxy-v0.1-chbgg   Created container metadata-proxy
kube-system   4m39s   Normal    Started             pod/metadata-proxy-v0.1-chbgg   Started container metadata-proxy
kube-system   4m39s   Normal    Pulling             pod/metadata-proxy-v0.1-chbgg   Pulling image "k8s.gcr.io/prometheus-to-sd:v0.5.0"
kube-system   4m37s   Normal    Pulled              pod/metadata-proxy-v0.1-chbgg   Successfully pulled image "k8s.gcr.io/prometheus-to-sd:v0.5.0"
kube-system   4m36s   Normal    Created             pod/metadata-proxy-v0.1-chbgg   Created container prometheus-to-sd-exporter
kube-system   4m35s   Normal    Started             pod/metadata-proxy-v0.1-chbgg   Started container prometheus-to-sd-exporter
kube-system   4m39s   Normal    Scheduled           pod/metadata-proxy-v0.1-nkdb2   Successfully assigned kube-system/metadata-proxy-v0.1-nkdb2 to bootstrap-e2e-minion-group-q10p
kube-system   4m38s   Warning   FailedMount         pod/metadata-proxy-v0.1-nkdb2   MountVolume.SetUp failed for volume "metadata-proxy-token-mplx6" : failed to sync secret cache: timed out waiting for the condition
kube-system   4m36s   Normal    Pulling             pod/metadata-proxy-v0.1-nkdb2   Pulling image "k8s.gcr.io/metadata-proxy:v0.1.12"
kube-system   4m34s   Normal    Pulled              pod/metadata-proxy-v0.1-nkdb2   Successfully pulled image "k8s.gcr.io/metadata-proxy:v0.1.12"
kube-system   4m32s   Normal    Created             pod/metadata-proxy-v0.1-nkdb2   Created container metadata-proxy
kube-system   4m31s   Normal    Started             pod/metadata-proxy-v0.1-nkdb2   Started container metadata-proxy
kube-system   4m31s   Normal    Pulling             pod/metadata-proxy-v0.1-nkdb2   Pulling image "k8s.gcr.io/prometheus-to-sd:v0.5.0"
kube-system   4m30s   Normal    Pulled              pod/metadata-proxy-v0.1-nkdb2   Successfully pulled image "k8s.gcr.io/prometheus-to-sd:v0.5.0"
kube-system   4m28s   Normal    Created             pod/metadata-proxy-v0.1-nkdb2   Created container prometheus-to-sd-exporter
kube-system   4m26s   Normal    Started             pod/metadata-proxy-v0.1-nkdb2   Started container prometheus-to-sd-exporter
kube-system   4m38s   Normal    Scheduled           pod/metadata-proxy-v0.1-zt754   Successfully assigned kube-system/metadata-proxy-v0.1-zt754 to bootstrap-e2e-minion-group-vrtv
kube-system   4m37s   Warning   FailedMount         pod/metadata-proxy-v0.1-zt754   MountVolume.SetUp failed for volume "metadata-proxy-token-mplx6" : failed to sync secret cache: timed out waiting for the condition
kube-system   4m34s   Normal    Pulling             pod/metadata-proxy-v0.1-zt754   Pulling image "k8s.gcr.io/metadata-proxy:v0.1.12"
kube-system   4m32s   Normal    Pulled              pod/metadata-proxy-v0.1-zt754   Successfully pulled image "k8s.gcr.io/metadata-proxy:v0.1.12"
kube-system   4m31s   Normal    Created             pod/metadata-proxy-v0.1-zt754   Created container metadata-proxy
kube-system   4m30s   Normal    Started             pod/metadata-proxy-v0.1-zt754   Started container metadata-proxy
kube-system   4m30s   Normal    Pulling             pod/metadata-proxy-v0.1-zt754   Pulling image "k8s.gcr.io/prometheus-to-sd:v0.5.0"
kube-system   4m29s   Normal    Pulled              pod/metadata-proxy-v0.1-zt754   Successfully pulled image "k8s.gcr.io/prometheus-to-sd:v0.5.0"
kube-system
                       4m28s       Normal    Created                      pod/metadata-proxy-v0.1-zt754                                    Created container prometheus-to-sd-exporter\nkube-system                          4m26s       Normal    Started                      pod/metadata-proxy-v0.1-zt754                                    Started container prometheus-to-sd-exporter\nkube-system                          4m44s       Normal    SuccessfulCreate             daemonset/metadata-proxy-v0.1                                    Created pod: metadata-proxy-v0.1-chbgg\nkube-system                          4m40s       Normal    SuccessfulCreate             daemonset/metadata-proxy-v0.1                                    Created pod: metadata-proxy-v0.1-666fv\nkube-system                          4m40s       Normal    SuccessfulCreate             daemonset/metadata-proxy-v0.1                                    Created pod: metadata-proxy-v0.1-nkdb2\nkube-system                          4m39s       Normal    SuccessfulCreate             daemonset/metadata-proxy-v0.1                                    Created pod: metadata-proxy-v0.1-9nsx7\nkube-system                          4m38s       Normal    SuccessfulCreate             daemonset/metadata-proxy-v0.1                                    Created pod: metadata-proxy-v0.1-zt754\nkube-system                          4m8s        Normal    Scheduled                    pod/metrics-server-v0.3.6-5f859c87d6-dtqxc                       Successfully assigned kube-system/metrics-server-v0.3.6-5f859c87d6-dtqxc to bootstrap-e2e-minion-group-qkcq\nkube-system                          4m7s        Normal    Pulling                      pod/metrics-server-v0.3.6-5f859c87d6-dtqxc                       Pulling image \"k8s.gcr.io/metrics-server-amd64:v0.3.6\"\nkube-system                          4m6s        Normal    Pulled                       pod/metrics-server-v0.3.6-5f859c87d6-dtqxc                       Successfully pulled image 
\"k8s.gcr.io/metrics-server-amd64:v0.3.6\"\nkube-system                          4m6s        Normal    Created                      pod/metrics-server-v0.3.6-5f859c87d6-dtqxc                       Created container metrics-server\nkube-system                          4m5s        Normal    Started                      pod/metrics-server-v0.3.6-5f859c87d6-dtqxc                       Started container metrics-server\nkube-system                          4m5s        Normal    Pulling                      pod/metrics-server-v0.3.6-5f859c87d6-dtqxc                       Pulling image \"k8s.gcr.io/addon-resizer:1.8.7\"\nkube-system                          4m4s        Normal    Pulled                       pod/metrics-server-v0.3.6-5f859c87d6-dtqxc                       Successfully pulled image \"k8s.gcr.io/addon-resizer:1.8.7\"\nkube-system                          4m4s        Normal    Created                      pod/metrics-server-v0.3.6-5f859c87d6-dtqxc                       Created container metrics-server-nanny\nkube-system                          4m3s        Normal    Started                      pod/metrics-server-v0.3.6-5f859c87d6-dtqxc                       Started container metrics-server-nanny\nkube-system                          4m8s        Normal    SuccessfulCreate             replicaset/metrics-server-v0.3.6-5f859c87d6                      Created pod: metrics-server-v0.3.6-5f859c87d6-dtqxc\nkube-system                          4m51s       Warning   FailedScheduling             pod/metrics-server-v0.3.6-65d4dc878-b8jf8                        no nodes available to schedule pods\nkube-system                          4m42s       Warning   FailedScheduling             pod/metrics-server-v0.3.6-65d4dc878-b8jf8                        0/1 nodes are available: 1 node(s) were unschedulable.\nkube-system                          4m40s       Warning   FailedScheduling             pod/metrics-server-v0.3.6-65d4dc878-b8jf8                        0/3 nodes are 
available: 1 node(s) were unschedulable, 2 node(s) had taints that the pod didn't tolerate.\nkube-system                          4m26s       Warning   FailedScheduling             pod/metrics-server-v0.3.6-65d4dc878-b8jf8                        0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) had taints that the pod didn't tolerate.\nkube-system                          4m15s       Normal    Scheduled                    pod/metrics-server-v0.3.6-65d4dc878-b8jf8                        Successfully assigned kube-system/metrics-server-v0.3.6-65d4dc878-b8jf8 to bootstrap-e2e-minion-group-vrtv\nkube-system                          4m14s       Normal    Pulling                      pod/metrics-server-v0.3.6-65d4dc878-b8jf8                        Pulling image \"k8s.gcr.io/metrics-server-amd64:v0.3.6\"\nkube-system                          4m13s       Normal    Pulled                       pod/metrics-server-v0.3.6-65d4dc878-b8jf8                        Successfully pulled image \"k8s.gcr.io/metrics-server-amd64:v0.3.6\"\nkube-system                          4m12s       Normal    Created                      pod/metrics-server-v0.3.6-65d4dc878-b8jf8                        Created container metrics-server\nkube-system                          4m12s       Normal    Started                      pod/metrics-server-v0.3.6-65d4dc878-b8jf8                        Started container metrics-server\nkube-system                          4m12s       Normal    Pulling                      pod/metrics-server-v0.3.6-65d4dc878-b8jf8                        Pulling image \"k8s.gcr.io/addon-resizer:1.8.7\"\nkube-system                          4m9s        Normal    Pulled                       pod/metrics-server-v0.3.6-65d4dc878-b8jf8                        Successfully pulled image \"k8s.gcr.io/addon-resizer:1.8.7\"\nkube-system                          4m9s        Normal    Created                      pod/metrics-server-v0.3.6-65d4dc878-b8jf8                        
Created container metrics-server-nanny\nkube-system                          4m8s        Normal    Started                      pod/metrics-server-v0.3.6-65d4dc878-b8jf8                        Started container metrics-server-nanny\nkube-system                          4m3s        Normal    Killing                      pod/metrics-server-v0.3.6-65d4dc878-b8jf8                        Stopping container metrics-server\nkube-system                          4m3s        Normal    Killing                      pod/metrics-server-v0.3.6-65d4dc878-b8jf8                        Stopping container metrics-server-nanny\nkube-system                          4m51s       Warning   FailedCreate                 replicaset/metrics-server-v0.3.6-65d4dc878                       Error creating: pods \"metrics-server-v0.3.6-65d4dc878-\" is forbidden: unable to validate against any pod security policy: []\nkube-system                          4m51s       Normal    SuccessfulCreate             replicaset/metrics-server-v0.3.6-65d4dc878                       Created pod: metrics-server-v0.3.6-65d4dc878-b8jf8\nkube-system                          4m3s        Normal    SuccessfulDelete             replicaset/metrics-server-v0.3.6-65d4dc878                       Deleted pod: metrics-server-v0.3.6-65d4dc878-b8jf8\nkube-system                          4m51s       Normal    ScalingReplicaSet            deployment/metrics-server-v0.3.6                                 Scaled up replica set metrics-server-v0.3.6-65d4dc878 to 1\nkube-system                          4m8s        Normal    ScalingReplicaSet            deployment/metrics-server-v0.3.6                                 Scaled up replica set metrics-server-v0.3.6-5f859c87d6 to 1\nkube-system                          4m3s        Normal    ScalingReplicaSet            deployment/metrics-server-v0.3.6                                 Scaled down replica set metrics-server-v0.3.6-65d4dc878 to 0\nkube-system                          4m48s       
Warning   FailedScheduling             pod/volume-snapshot-controller-0                                 no nodes available to schedule pods\nkube-system                          4m41s       Warning   FailedScheduling             pod/volume-snapshot-controller-0                                 0/1 nodes are available: 1 node(s) were unschedulable.\nkube-system                          4m30s       Warning   FailedScheduling             pod/volume-snapshot-controller-0                                 0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) had taints that the pod didn't tolerate.\nkube-system                          4m22s       Normal    Scheduled                    pod/volume-snapshot-controller-0                                 Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-qn53\nkube-system                          4m21s       Normal    Pulling                      pod/volume-snapshot-controller-0                                 Pulling image \"quay.io/k8scsi/snapshot-controller:v2.0.0-rc2\"\nkube-system                          4m18s       Normal    Pulled                       pod/volume-snapshot-controller-0                                 Successfully pulled image \"quay.io/k8scsi/snapshot-controller:v2.0.0-rc2\"\nkube-system                          4m18s       Normal    Created                      pod/volume-snapshot-controller-0                                 Created container volume-snapshot-controller\nkube-system                          4m17s       Normal    Started                      pod/volume-snapshot-controller-0                                 Started container volume-snapshot-controller\nkube-system                          4m48s       Normal    SuccessfulCreate             statefulset/volume-snapshot-controller                           create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful\nkubectl-5626                         9s      
    Normal    Scheduled                    pod/update-demo-nautilus-546h7                                   Successfully assigned kubectl-5626/update-demo-nautilus-546h7 to bootstrap-e2e-minion-group-qn53\nkubectl-5626                         5s          Normal    Pulling                      pod/update-demo-nautilus-546h7                                   Pulling image \"gcr.io/kubernetes-e2e-test-images/nautilus:1.0\"\nkubectl-5626                         4s          Normal    Pulled                       pod/update-demo-nautilus-546h7                                   Successfully pulled image \"gcr.io/kubernetes-e2e-test-images/nautilus:1.0\"\nkubectl-5626                         4s          Normal    Created                      pod/update-demo-nautilus-546h7                                   Created container update-demo\nkubectl-5626                         3s          Normal    Started                      pod/update-demo-nautilus-546h7                                   Started container update-demo\nkubectl-5626                         9s          Normal    Scheduled                    pod/update-demo-nautilus-ht6c4                                   Successfully assigned kubectl-5626/update-demo-nautilus-ht6c4 to bootstrap-e2e-minion-group-vrtv\nkubectl-5626                         7s          Warning   FailedMount                  pod/update-demo-nautilus-ht6c4                                   MountVolume.SetUp failed for volume \"default-token-cxbv8\" : failed to sync secret cache: timed out waiting for the condition\nkubectl-5626                         3s          Normal    Pulling                      pod/update-demo-nautilus-ht6c4                                   Pulling image \"gcr.io/kubernetes-e2e-test-images/nautilus:1.0\"\nkubectl-5626                         1s          Normal    Pulled                       pod/update-demo-nautilus-ht6c4                                   Successfully pulled image 
\"gcr.io/kubernetes-e2e-test-images/nautilus:1.0\"\nkubectl-5626                         9s          Normal    SuccessfulCreate             replicationcontroller/update-demo-nautilus                       Created pod: update-demo-nautilus-ht6c4\nkubectl-5626                         9s          Normal    SuccessfulCreate             replicationcontroller/update-demo-nautilus                       Created pod: update-demo-nautilus-546h7\nkubectl-7630                         3s          Normal    Scheduled                    pod/deployment4g5bncrtz7t-87fd78899-7z2vt                        Successfully assigned kubectl-7630/deployment4g5bncrtz7t-87fd78899-7z2vt to bootstrap-e2e-minion-group-qkcq\nkubectl-7630                         1s          Warning   FailedCreatePodSandBox       pod/deployment4g5bncrtz7t-87fd78899-7z2vt                        Failed to create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod \"deployment4g5bncrtz7t-87fd78899-7z2vt\": Error response from daemon: OCI runtime start failed: cannot start an already running container: unknown\nkubectl-7630                         3s          Normal    SuccessfulCreate             replicaset/deployment4g5bncrtz7t-87fd78899                       Created pod: deployment4g5bncrtz7t-87fd78899-7z2vt\nkubectl-7630                         3s          Normal    ScalingReplicaSet            deployment/deployment4g5bncrtz7t                                 Scaled up replica set deployment4g5bncrtz7t-87fd78899 to 1\nkubectl-7630                         2s          Normal    Scheduled                    pod/ds6g5bncrtz7t-4v9g8                                          Successfully assigned kubectl-7630/ds6g5bncrtz7t-4v9g8 to bootstrap-e2e-minion-group-vrtv\nkubectl-7630                         2s          Normal    Scheduled                    pod/ds6g5bncrtz7t-8vgbk                                          Successfully assigned kubectl-7630/ds6g5bncrtz7t-8vgbk to 
bootstrap-e2e-minion-group-qkcq\nkubectl-7630                         1s          Normal    Pulling                      pod/ds6g5bncrtz7t-8vgbk                                          Pulling image \"fedora:latest\"\nkubectl-7630                         2s          Normal    Scheduled                    pod/ds6g5bncrtz7t-blff9                                          Successfully assigned kubectl-7630/ds6g5bncrtz7t-blff9 to bootstrap-e2e-minion-group-qn53\nkubectl-7630                         1s          Normal    Pulling                      pod/ds6g5bncrtz7t-blff9                                          Pulling image \"fedora:latest\"\nkubectl-7630                         2s          Normal    Scheduled                    pod/ds6g5bncrtz7t-h2nhl                                          Successfully assigned kubectl-7630/ds6g5bncrtz7t-h2nhl to bootstrap-e2e-minion-group-q10p\nkubectl-7630                         1s          Warning   FailedMount                  pod/ds6g5bncrtz7t-h2nhl                                          MountVolume.SetUp failed for volume \"default-token-fl6tt\" : failed to sync secret cache: timed out waiting for the condition\nkubectl-7630                         2s          Normal    SuccessfulCreate             daemonset/ds6g5bncrtz7t                                          Created pod: ds6g5bncrtz7t-blff9\nkubectl-7630                         2s          Normal    SuccessfulCreate             daemonset/ds6g5bncrtz7t                                          Created pod: ds6g5bncrtz7t-8vgbk\nkubectl-7630                         2s          Normal    SuccessfulCreate             daemonset/ds6g5bncrtz7t                                          Created pod: ds6g5bncrtz7t-h2nhl\nkubectl-7630                         2s          Normal    SuccessfulCreate             daemonset/ds6g5bncrtz7t                                          Created pod: ds6g5bncrtz7t-4v9g8\nkubectl-7630                         <unknown>             Laziness           
                                                                           some data here\nkubectl-7630                         4s          Normal    ADD                          ingress/ingress1g5bncrtz7t                                       kubectl-7630/ingress1g5bncrtz7t\nkubectl-7630                         4s          Warning   Translate                    ingress/ingress1g5bncrtz7t                                       error while evaluating the ingress spec: could not find service \"kubectl-7630/service\"\nkubectl-7630                         13s         Warning   FailedScheduling             pod/pod1g5bncrtz7t                                               0/5 nodes are available: 1 node(s) were unschedulable, 4 Insufficient cpu.\nkubectl-7630                         13s         Warning   FailedScheduling             pod/pod1g5bncrtz7t                                               skip schedule deleting pod: kubectl-7630/pod1g5bncrtz7t\nkubectl-7630                         15s         Warning   ProvisioningFailed           persistentvolumeclaim/pvc1g5bncrtz7t                             Failed to provision volume with StorageClass \"standard\": claim.Spec.Selector is not supported for dynamic provisioning on GCE\nkubectl-7630                         12s         Normal    Scheduled                    pod/rc1g5bncrtz7t-d6jnk                                          Successfully assigned kubectl-7630/rc1g5bncrtz7t-d6jnk to bootstrap-e2e-minion-group-vrtv\nkubectl-7630                         9s          Normal    Pulling                      pod/rc1g5bncrtz7t-d6jnk                                          Pulling image \"fedora:latest\"\nkubectl-7630                         12s         Normal    SuccessfulCreate             replicationcontroller/rc1g5bncrtz7t                              Created pod: rc1g5bncrtz7t-d6jnk\nkubectl-7630                         1s          Normal    Scheduled                    pod/rs3g5bncrtz7t-lvttw                               
           Successfully assigned kubectl-7630/rs3g5bncrtz7t-lvttw to bootstrap-e2e-minion-group-qkcq\nkubectl-7630                         1s          Normal    SuccessfulCreate             replicaset/rs3g5bncrtz7t                                         Created pod: rs3g5bncrtz7t-lvttw\nkubectl-7630                         3s          Warning   FailedCreate                 statefulset/ss3g5bncrtz7t                                        create Pod ss3g5bncrtz7t-0 in StatefulSet ss3g5bncrtz7t failed error: Pod \"ss3g5bncrtz7t-0\" is invalid: spec.containers: Required value\nnettest-2543                         88s         Normal    Scheduled                    pod/netserver-0                                                  Successfully assigned nettest-2543/netserver-0 to bootstrap-e2e-minion-group-q10p\nnettest-2543                         86s         Normal    Pulling                      pod/netserver-0                                                  Pulling image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\"\nnettest-2543                         68s         Normal    Pulled                       pod/netserver-0                                                  Successfully pulled image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\"\nnettest-2543                         67s         Normal    Created                      pod/netserver-0                                                  Created container webserver\nnettest-2543                         66s         Normal    Started                      pod/netserver-0                                                  Started container webserver\nnettest-2543                         88s         Normal    Scheduled                    pod/netserver-1                                                  Successfully assigned nettest-2543/netserver-1 to bootstrap-e2e-minion-group-qkcq\nnettest-2543                         85s         Normal    Pulling                      pod/netserver-1                                
                  Pulling image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\"\nnettest-2543                         79s         Normal    Pulled                       pod/netserver-1                                                  Successfully pulled image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\"\nnettest-2543                         79s         Normal    Created                      pod/netserver-1                                                  Created container webserver\nnettest-2543                         78s         Normal    Started                      pod/netserver-1                                                  Started container webserver\nnettest-2543                         88s         Normal    Scheduled                    pod/netserver-2                                                  Successfully assigned nettest-2543/netserver-2 to bootstrap-e2e-minion-group-qn53\nnettest-2543                         86s         Normal    Pulling                      pod/netserver-2                                                  Pulling image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\"\nnettest-2543                         72s         Normal    Pulled                       pod/netserver-2                                                  Successfully pulled image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\"\nnettest-2543                         71s         Normal    Created                      pod/netserver-2                                                  Created container webserver\nnettest-2543                         70s         Normal    Started                      pod/netserver-2                                                  Started container webserver\nnettest-2543                         87s         Normal    Scheduled                    pod/netserver-3                                                  Successfully assigned nettest-2543/netserver-3 to bootstrap-e2e-minion-group-vrtv\nnettest-2543                         
85s         Normal    Pulling                      pod/netserver-3                                                  Pulling image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\"\nnettest-2543                         77s         Normal    Pulled                       pod/netserver-3                                                  Successfully pulled image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\"\nnettest-2543                         77s         Normal    Created                      pod/netserver-3                                                  Created container webserver\nnettest-2543                         76s         Normal    Started                      pod/netserver-3                                                  Started container webserver\nnettest-2543                         51s         Normal    Scheduled                    pod/test-container-pod                                           Successfully assigned nettest-2543/test-container-pod to bootstrap-e2e-minion-group-qn53\nnettest-2543                         48s         Normal    Pulled                       pod/test-container-pod                                           Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nnettest-2543                         48s         Normal    Created                      pod/test-container-pod                                           Created container webserver\nnettest-2543                         47s         Normal    Started                      pod/test-container-pod                                           Started container webserver\npersistent-local-volumes-test-158    25s         Normal    Pulled                       pod/hostexec-bootstrap-e2e-minion-group-q10p-zs7w8               Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\npersistent-local-volumes-test-158    24s         Normal    Created                      
pod/hostexec-bootstrap-e2e-minion-group-q10p-zs7w8               Created container agnhost\npersistent-local-volumes-test-158    22s         Normal    Started                      pod/hostexec-bootstrap-e2e-minion-group-q10p-zs7w8               Started container agnhost\npersistent-local-volumes-test-158    17s         Warning   ProvisioningFailed           persistentvolumeclaim/pvc-d6q5d                                  no volume plugin matched\npersistent-local-volumes-test-158    9s          Normal    Scheduled                    pod/security-context-5a4bf4a4-4a59-41f1-8cae-59d0e9aae1d0        Successfully assigned persistent-local-volumes-test-158/security-context-5a4bf4a4-4a59-41f1-8cae-59d0e9aae1d0 to bootstrap-e2e-minion-group-q10p\npersistent-local-volumes-test-158    5s          Normal    Pulled                       pod/security-context-5a4bf4a4-4a59-41f1-8cae-59d0e9aae1d0        Container image \"docker.io/library/busybox:1.29\" already present on machine\npersistent-local-volumes-test-158    5s          Normal    Created                      pod/security-context-5a4bf4a4-4a59-41f1-8cae-59d0e9aae1d0        Created container write-pod\npersistent-local-volumes-test-158    5s          Normal    Started                      pod/security-context-5a4bf4a4-4a59-41f1-8cae-59d0e9aae1d0        Started container write-pod\npersistent-local-volumes-test-4682   19s         Normal    Pulled                       pod/hostexec-bootstrap-e2e-minion-group-q10p-j2kgb               Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\npersistent-local-volumes-test-4682   19s         Normal    Created                      pod/hostexec-bootstrap-e2e-minion-group-q10p-j2kgb               Created container agnhost\npersistent-local-volumes-test-4682   17s         Normal    Started                      pod/hostexec-bootstrap-e2e-minion-group-q10p-j2kgb               Started container agnhost\npersistent-local-volumes-test-4682   7s        
  Normal    Scheduled                    pod/security-context-a676a0d6-63f9-4646-98df-fcfda01d3b23        Successfully assigned persistent-local-volumes-test-4682/security-context-a676a0d6-63f9-4646-98df-fcfda01d3b23 to bootstrap-e2e-minion-group-q10p\npersistent-local-volumes-test-4682   5s          Normal    Pulled                       pod/security-context-a676a0d6-63f9-4646-98df-fcfda01d3b23        Container image \"docker.io/library/busybox:1.29\" already present on machine\npersistent-local-volumes-test-4682   5s          Normal    Created                      pod/security-context-a676a0d6-63f9-4646-98df-fcfda01d3b23        Created container write-pod\npersistent-local-volumes-test-4682   3s          Normal    Started                      pod/security-context-a676a0d6-63f9-4646-98df-fcfda01d3b23        Started container write-pod\npersistent-local-volumes-test-8451   81s         Normal    Pulling                      pod/hostexec-bootstrap-e2e-minion-group-q10p-prdbx               Pulling image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\"\npersistent-local-volumes-test-8451   68s         Normal    Pulled                       pod/hostexec-bootstrap-e2e-minion-group-q10p-prdbx               Successfully pulled image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\"\npersistent-local-volumes-test-8451   66s         Normal    Created                      pod/hostexec-bootstrap-e2e-minion-group-q10p-prdbx               Created container agnhost\npersistent-local-volumes-test-8451   64s         Normal    Started                      pod/hostexec-bootstrap-e2e-minion-group-q10p-prdbx               Started container agnhost\npersistent-local-volumes-test-8451   33s         Normal    Scheduled                    pod/security-context-81c57741-b951-488f-985a-204e150ae56e        Successfully assigned persistent-local-volumes-test-8451/security-context-81c57741-b951-488f-985a-204e150ae56e to bootstrap-e2e-minion-group-q10p\npersistent-local-volumes-test-8451   29s   
      Normal    Pulled                       pod/security-context-81c57741-b951-488f-985a-204e150ae56e        Container image \"docker.io/library/busybox:1.29\" already present on machine\npersistent-local-volumes-test-8451   29s         Normal    Created                      pod/security-context-81c57741-b951-488f-985a-204e150ae56e        Created container write-pod\npersistent-local-volumes-test-8451   27s         Normal    Started                      pod/security-context-81c57741-b951-488f-985a-204e150ae56e        Started container write-pod\npersistent-local-volumes-test-8451   15s         Normal    Killing                      pod/security-context-81c57741-b951-488f-985a-204e150ae56e        Stopping container write-pod\npersistent-local-volumes-test-8451   51s         Normal    Scheduled                    pod/security-context-f4c73856-24ab-45a9-a0d4-2ea953bd87c2        Successfully assigned persistent-local-volumes-test-8451/security-context-f4c73856-24ab-45a9-a0d4-2ea953bd87c2 to bootstrap-e2e-minion-group-q10p\npersistent-local-volumes-test-8451   45s         Normal    Pulled                       pod/security-context-f4c73856-24ab-45a9-a0d4-2ea953bd87c2        Container image \"docker.io/library/busybox:1.29\" already present on machine\npersistent-local-volumes-test-8451   45s         Normal    Created                      pod/security-context-f4c73856-24ab-45a9-a0d4-2ea953bd87c2        Created container write-pod\npersistent-local-volumes-test-8451   43s         Normal    Started                      pod/security-context-f4c73856-24ab-45a9-a0d4-2ea953bd87c2        Started container write-pod\npersistent-local-volumes-test-8451   15s         Normal    Killing                      pod/security-context-f4c73856-24ab-45a9-a0d4-2ea953bd87c2        Stopping container write-pod\nprojected-195                        9s          Normal    Scheduled                    pod/pod-projected-secrets-2d3af552-c15e-49e9-8a3c-180219124726   Successfully assigned 
projected-195/pod-projected-secrets-2d3af552-c15e-49e9-8a3c-180219124726 to bootstrap-e2e-minion-group-vrtv\nprojected-195                        4s          Normal    Pulled                       pod/pod-projected-secrets-2d3af552-c15e-49e9-8a3c-180219124726   Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nprojected-195                        3s          Normal    Created                      pod/pod-projected-secrets-2d3af552-c15e-49e9-8a3c-180219124726   Created container projected-secret-volume-test\nprojected-195                        2s          Normal    Started                      pod/pod-projected-secrets-2d3af552-c15e-49e9-8a3c-180219124726   Started container projected-secret-volume-test\nprojected-5454                       16s         Normal    Scheduled                    pod/labelsupdatec42af265-6ecc-4902-9990-c4de108151c2             Successfully assigned projected-5454/labelsupdatec42af265-6ecc-4902-9990-c4de108151c2 to bootstrap-e2e-minion-group-vrtv\nprojected-5454                       12s         Normal    Pulled                       pod/labelsupdatec42af265-6ecc-4902-9990-c4de108151c2             Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nprojected-5454                       12s         Normal    Created                      pod/labelsupdatec42af265-6ecc-4902-9990-c4de108151c2             Created container client-container\nprojected-5454                       11s         Normal    Started                      pod/labelsupdatec42af265-6ecc-4902-9990-c4de108151c2             Started container client-container\nprojected-9350                       4s          Normal    Scheduled                    pod/metadata-volume-4d0335be-beb7-4811-a7dd-0f98e9d31296         Successfully assigned projected-9350/metadata-volume-4d0335be-beb7-4811-a7dd-0f98e9d31296 to bootstrap-e2e-minion-group-vrtv\nprojected-9350                       1s          Normal    
Pulled                       pod/metadata-volume-4d0335be-beb7-4811-a7dd-0f98e9d31296         Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nprovisioning-2262                    47s         Normal    Pulled                       pod/hostexec-bootstrap-e2e-minion-group-vrtv-mc4r8               Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nprovisioning-2262                    47s         Normal    Created                      pod/hostexec-bootstrap-e2e-minion-group-vrtv-mc4r8               Created container agnhost\nprovisioning-2262                    46s         Normal    Started                      pod/hostexec-bootstrap-e2e-minion-group-vrtv-mc4r8               Started container agnhost\nprovisioning-2262                    12s         Normal    Killing                      pod/hostexec-bootstrap-e2e-minion-group-vrtv-mc4r8               Stopping container agnhost\nprovisioning-2262                    21s         Normal    Pulled                       pod/pod-subpath-test-preprovisionedpv-4s9x                       Container image \"docker.io/library/busybox:1.29\" already present on machine\nprovisioning-2262                    21s         Normal    Created                      pod/pod-subpath-test-preprovisionedpv-4s9x                       Created container init-volume-preprovisionedpv-4s9x\nprovisioning-2262                    20s         Normal    Started                      pod/pod-subpath-test-preprovisionedpv-4s9x                       Started container init-volume-preprovisionedpv-4s9x\nprovisioning-2262                    19s         Normal    Pulled                       pod/pod-subpath-test-preprovisionedpv-4s9x                       Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nprovisioning-2262                    19s         Normal    Created                      
pod/pod-subpath-test-preprovisionedpv-4s9x                       Created container test-container-subpath-preprovisionedpv-4s9x\nprovisioning-2262                    18s         Normal    Started                      pod/pod-subpath-test-preprovisionedpv-4s9x                       Started container test-container-subpath-preprovisionedpv-4s9x\nprovisioning-2262                    37s         Warning   ProvisioningFailed           persistentvolumeclaim/pvc-7lsql                                  storageclass.storage.k8s.io \"provisioning-2262\" not found\nprovisioning-4978                    82s         Normal    Pulling                      pod/hostexec-bootstrap-e2e-minion-group-q10p-h5xdl               Pulling image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\"\nprovisioning-4978                    68s         Normal    Pulled                       pod/hostexec-bootstrap-e2e-minion-group-q10p-h5xdl               Successfully pulled image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\"\nprovisioning-4978                    66s         Normal    Created                      pod/hostexec-bootstrap-e2e-minion-group-q10p-h5xdl               Created container agnhost\nprovisioning-4978                    65s         Normal    Started                      pod/hostexec-bootstrap-e2e-minion-group-q10p-h5xdl               Started container agnhost\nprovisioning-4978                    32s         Normal    Pulled                       pod/pod-subpath-test-preprovisionedpv-tlms                       Container image \"docker.io/library/busybox:1.29\" already present on machine\nprovisioning-4978                    31s         Normal    Created                      pod/pod-subpath-test-preprovisionedpv-tlms                       Created container init-volume-preprovisionedpv-tlms\nprovisioning-4978                    29s         Normal    Started                      pod/pod-subpath-test-preprovisionedpv-tlms                       Started container 
init-volume-preprovisionedpv-tlms\nprovisioning-4978                    27s         Normal    Pulled                       pod/pod-subpath-test-preprovisionedpv-tlms                       Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nprovisioning-4978                    27s         Normal    Created                      pod/pod-subpath-test-preprovisionedpv-tlms                       Created container test-init-subpath-preprovisionedpv-tlms\nprovisioning-4978                    24s         Normal    Started                      pod/pod-subpath-test-preprovisionedpv-tlms                       Started container test-init-subpath-preprovisionedpv-tlms\nprovisioning-4978                    21s         Normal    Pulled                       pod/pod-subpath-test-preprovisionedpv-tlms                       Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nprovisioning-4978                    21s         Normal    Created                      pod/pod-subpath-test-preprovisionedpv-tlms                       Created container test-container-subpath-preprovisionedpv-tlms\nprovisioning-4978                    19s         Normal    Started                      pod/pod-subpath-test-preprovisionedpv-tlms                       Started container test-container-subpath-preprovisionedpv-tlms\nprovisioning-4978                    11s         Normal    Pulled                       pod/pod-subpath-test-preprovisionedpv-tlms                       Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nprovisioning-4978                    11s         Normal    Created                      pod/pod-subpath-test-preprovisionedpv-tlms                       Created container test-container-subpath-preprovisionedpv-tlms\nprovisioning-4978                    11s         Normal    Started                      pod/pod-subpath-test-preprovisionedpv-tlms             
          Started container test-container-subpath-preprovisionedpv-tlms\nprovisioning-4978                    48s         Warning   ProvisioningFailed           persistentvolumeclaim/pvc-fs6ml                                  storageclass.storage.k8s.io \"provisioning-4978\" not found\nprovisioning-7841                    36s         Normal    Scheduled                    pod/pod-subpath-test-inlinevolume-4csd                           Successfully assigned provisioning-7841/pod-subpath-test-inlinevolume-4csd to bootstrap-e2e-minion-group-vrtv\nprovisioning-7841                    34s         Normal    Pulled                       pod/pod-subpath-test-inlinevolume-4csd                           Container image \"docker.io/library/busybox:1.29\" already present on machine\nprovisioning-7841                    33s         Normal    Created                      pod/pod-subpath-test-inlinevolume-4csd                           Created container init-volume-inlinevolume-4csd\nprovisioning-7841                    33s         Normal    Started                      pod/pod-subpath-test-inlinevolume-4csd                           Started container init-volume-inlinevolume-4csd\nprovisioning-7841                    31s         Normal    Pulled                       pod/pod-subpath-test-inlinevolume-4csd                           Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nprovisioning-7841                    31s         Normal    Created                      pod/pod-subpath-test-inlinevolume-4csd                           Created container test-container-subpath-inlinevolume-4csd\nprovisioning-7841                    30s         Normal    Started                      pod/pod-subpath-test-inlinevolume-4csd                           Started container test-container-subpath-inlinevolume-4csd\nprovisioning-8413                    29s         Normal    Pulled                       
pod/hostexec-bootstrap-e2e-minion-group-qkcq-ch4n5               Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nprovisioning-8413                    29s         Normal    Created                      pod/hostexec-bootstrap-e2e-minion-group-qkcq-ch4n5               Created container agnhost\nprovisioning-8413                    29s         Normal    Started                      pod/hostexec-bootstrap-e2e-minion-group-qkcq-ch4n5               Started container agnhost\nprovisioning-8413                    8s          Normal    Pulling                      pod/pod-subpath-test-preprovisionedpv-sfj9                       Pulling image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\"\nprovisioning-8413                    7s          Normal    Pulled                       pod/pod-subpath-test-preprovisionedpv-sfj9                       Successfully pulled image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\"\nprovisioning-8413                    7s          Normal    Created                      pod/pod-subpath-test-preprovisionedpv-sfj9                       Created container test-init-subpath-preprovisionedpv-sfj9\nprovisioning-8413                    7s          Normal    Started                      pod/pod-subpath-test-preprovisionedpv-sfj9                       Started container test-init-subpath-preprovisionedpv-sfj9\nprovisioning-8413                    7s          Normal    Pulled                       pod/pod-subpath-test-preprovisionedpv-sfj9                       Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nprovisioning-8413                    7s          Normal    Created                      pod/pod-subpath-test-preprovisionedpv-sfj9                       Created container test-container-subpath-preprovisionedpv-sfj9\nprovisioning-8413                    6s          Normal    Started                      pod/pod-subpath-test-preprovisionedpv-sfj9         
              Started container test-container-subpath-preprovisionedpv-sfj9\nprovisioning-8413                    6s          Normal    Pulled                       pod/pod-subpath-test-preprovisionedpv-sfj9                       Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nprovisioning-8413                    6s          Normal    Created                      pod/pod-subpath-test-preprovisionedpv-sfj9                       Created container test-container-volume-preprovisionedpv-sfj9\nprovisioning-8413                    5s          Normal    Started                      pod/pod-subpath-test-preprovisionedpv-sfj9                       Started container test-container-volume-preprovisionedpv-sfj9\nprovisioning-8413                    27s         Warning   ProvisioningFailed           persistentvolumeclaim/pvc-7z74k                                  storageclass.storage.k8s.io \"provisioning-8413\" not found\nprovisioning-8742                    17s         Normal    Scheduled                    pod/gluster-server                                               Successfully assigned provisioning-8742/gluster-server to bootstrap-e2e-minion-group-vrtv\nprovisioning-8742                    13s         Normal    Pulling                      pod/gluster-server                                               Pulling image \"gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0\"\nprovisioning-990                     47s         Normal    WaitForFirstConsumer         persistentvolumeclaim/pvc-h6d94                                  waiting for first consumer to be created before binding\nprovisioning-990                     44s         Normal    ProvisioningSucceeded        persistentvolumeclaim/pvc-h6d94                                  Successfully provisioned volume pvc-20a925ae-ee6d-444e-92c2-27a20a1a8194 using kubernetes.io/gce-pd\nprovisioning-990                     7s          Normal    SuccessfulAttachVolume       
pod/pvc-volume-tester-reader-hxjzh                               AttachVolume.Attach succeeded for volume \"pvc-20a925ae-ee6d-444e-92c2-27a20a1a8194\"\nprovisioning-990                     1s          Normal    Pulled                       pod/pvc-volume-tester-reader-hxjzh                               Container image \"docker.io/library/busybox:1.29\" already present on machine\nprovisioning-990                     1s          Normal    Created                      pod/pvc-volume-tester-reader-hxjzh                               Created container volume-tester\nprovisioning-990                     42s         Normal    Scheduled                    pod/pvc-volume-tester-writer-h7pcl                               Successfully assigned provisioning-990/pvc-volume-tester-writer-h7pcl to bootstrap-e2e-minion-group-vrtv\nprovisioning-990                     42s         Warning   FailedMount                  pod/pvc-volume-tester-writer-h7pcl                               Unable to attach or mount volumes: unmounted volumes=[my-volume default-token-ql44h], unattached volumes=[my-volume default-token-ql44h]: error processing PVC provisioning-990/pvc-h6d94: failed to fetch PVC from API server: persistentvolumeclaims \"pvc-h6d94\" is forbidden: User \"system:node:bootstrap-e2e-minion-group-vrtv\" cannot get resource \"persistentvolumeclaims\" in API group \"\" in the namespace \"provisioning-990\": no relationship found between node \"bootstrap-e2e-minion-group-vrtv\" and this object\nprovisioning-990                     41s         Warning   FailedMount                  pod/pvc-volume-tester-writer-h7pcl                               MountVolume.SetUp failed for volume \"default-token-ql44h\" : failed to sync secret cache: timed out waiting for the condition\nprovisioning-990                     35s         Normal    SuccessfulAttachVolume       pod/pvc-volume-tester-writer-h7pcl                               AttachVolume.Attach succeeded for volume 
\"pvc-20a925ae-ee6d-444e-92c2-27a20a1a8194\"\nprovisioning-990                     29s         Normal    Pulled                       pod/pvc-volume-tester-writer-h7pcl                               Container image \"docker.io/library/busybox:1.29\" already present on machine\nprovisioning-990                     28s         Normal    Created                      pod/pvc-volume-tester-writer-h7pcl                               Created container volume-tester\nprovisioning-990                     27s         Normal    Started                      pod/pvc-volume-tester-writer-h7pcl                               Started container volume-tester\nproxy-3473                           13s         Normal    Scheduled                    pod/proxy-service-b7gm6-njwbl                                    Successfully assigned proxy-3473/proxy-service-b7gm6-njwbl to bootstrap-e2e-minion-group-vrtv\nproxy-3473                           10s         Normal    Pulled                       pod/proxy-service-b7gm6-njwbl                                    Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nproxy-3473                           10s         Normal    Created                      pod/proxy-service-b7gm6-njwbl                                    Created container proxy-service-b7gm6\nproxy-3473                           9s          Normal    Started                      pod/proxy-service-b7gm6-njwbl                                    Started container proxy-service-b7gm6\nproxy-3473                           13s         Normal    SuccessfulCreate             replicationcontroller/proxy-service-b7gm6                        Created pod: proxy-service-b7gm6-njwbl\npv-4462                              24s         Normal    Scheduled                    pod/nfs-server                                                   Successfully assigned pv-4462/nfs-server to bootstrap-e2e-minion-group-vrtv\npv-4462                              21s        
 Normal    Pulling                      pod/nfs-server                                                   Pulling image \"gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0\"\nsysctl-2620                          15s         Normal    Scheduled                    pod/sysctl-d11cfdbf-401e-40ad-94a3-17fbb042a280                  Successfully assigned sysctl-2620/sysctl-d11cfdbf-401e-40ad-94a3-17fbb042a280 to bootstrap-e2e-minion-group-vrtv\nsysctl-2620                          12s         Normal    Pulled                       pod/sysctl-d11cfdbf-401e-40ad-94a3-17fbb042a280                  Container image \"docker.io/library/busybox:1.29\" already present on machine\nsysctl-2620                          12s         Normal    Created                      pod/sysctl-d11cfdbf-401e-40ad-94a3-17fbb042a280                  Created container test-container\nsysctl-2620                          10s         Normal    Started                      pod/sysctl-d11cfdbf-401e-40ad-94a3-17fbb042a280                  Started container test-container\nvolume-1465                          6s          Normal    Pulled                       pod/hostexec-bootstrap-e2e-minion-group-q10p-xp7wx               Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nvolume-1465                          6s          Normal    Created                      pod/hostexec-bootstrap-e2e-minion-group-q10p-xp7wx               Created container agnhost\nvolume-1465                          5s          Normal    Started                      pod/hostexec-bootstrap-e2e-minion-group-q10p-xp7wx               Started container agnhost\nvolume-3891                          42s         Normal    LeaderElection               endpoints/example.com-nfs-volume-3891                            external-provisioner-p5xdr_80ab2dc6-3766-41ed-af82-bea706e12bd0 became leader\nvolume-3891                          8s          Normal    Scheduled                    
pod/exec-volume-test-preprovisionedpv-rggh                       Successfully assigned volume-3891/exec-volume-test-preprovisionedpv-rggh to bootstrap-e2e-minion-group-vrtv\nvolume-3891                          7s          Warning   FailedMount                  pod/exec-volume-test-preprovisionedpv-rggh                       Unable to attach or mount volumes: unmounted volumes=[vol1 default-token-lhn2x], unattached volumes=[vol1 default-token-lhn2x]: error processing PVC volume-3891/pvc-wzdbt: failed to fetch PVC from API server: persistentvolumeclaims \"pvc-wzdbt\" is forbidden: User \"system:node:bootstrap-e2e-minion-group-vrtv\" cannot get resource \"persistentvolumeclaims\" in API group \"\" in the namespace \"volume-3891\": no relationship found between node \"bootstrap-e2e-minion-group-vrtv\" and this object\nvolume-3891                          7s          Warning   FailedMount                  pod/exec-volume-test-preprovisionedpv-rggh                       MountVolume.SetUp failed for volume \"default-token-lhn2x\" : failed to sync secret cache: timed out waiting for the condition\nvolume-3891                          53s         Normal    Scheduled                    pod/external-provisioner-p5xdr                                   Successfully assigned volume-3891/external-provisioner-p5xdr to bootstrap-e2e-minion-group-qn53\nvolume-3891                          49s         Normal    Pulled                       pod/external-provisioner-p5xdr                                   Container image \"quay.io/kubernetes_incubator/nfs-provisioner:v2.2.2\" already present on machine\nvolume-3891                          49s         Normal    Created                      pod/external-provisioner-p5xdr                                   Created container nfs-provisioner\nvolume-3891                          48s         Normal    Started                      pod/external-provisioner-p5xdr                                   Started container nfs-provisioner\nvolume-3891  
                        42s         Normal    Scheduled                    pod/nfs-server                                                   Successfully assigned volume-3891/nfs-server to bootstrap-e2e-minion-group-qkcq\nvolume-3891                          41s         Normal    Pulling                      pod/nfs-server                                                   Pulling image \"gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0\"\nvolume-3891                          28s         Normal    Pulled                       pod/nfs-server                                                   Successfully pulled image \"gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0\"\nvolume-3891                          27s         Normal    Created                      pod/nfs-server                                                   Created container nfs-server\nvolume-3891                          27s         Normal    Started                      pod/nfs-server                                                   Started container nfs-server\nvolume-3891                          26s         Warning   ProvisioningFailed           persistentvolumeclaim/pvc-wzdbt                                  storageclass.storage.k8s.io \"volume-3891\" not found\nvolume-4652                          24s         Normal    Scheduled                    pod/gcepd-client                                                 Successfully assigned volume-4652/gcepd-client to bootstrap-e2e-minion-group-qn53\nvolume-4652                          12s         Normal    SuccessfulAttachVolume       pod/gcepd-client                                                 AttachVolume.Attach succeeded for volume \"gcepd-fzctn\"\nvolume-4652                          4s          Normal    Pulled                       pod/gcepd-client                                                 Container image \"docker.io/library/busybox:1.29\" already present on machine\nvolume-4652                          3s          Normal    Created      
                pod/gcepd-client                                                 Created container gcepd-client\nvolume-4652                          3s          Normal    Started                      pod/gcepd-client                                                 Started container gcepd-client\nvolume-4652                          52s         Normal    Scheduled                    pod/gcepd-injector                                               Successfully assigned volume-4652/gcepd-injector to bootstrap-e2e-minion-group-qn53\nvolume-4652                          45s         Normal    SuccessfulAttachVolume       pod/gcepd-injector                                               AttachVolume.Attach succeeded for volume \"gcepd-fzctn\"\nvolume-4652                          38s         Normal    Pulled                       pod/gcepd-injector                                               Container image \"docker.io/library/busybox:1.29\" already present on machine\nvolume-4652                          38s         Normal    Created                      pod/gcepd-injector                                               Created container gcepd-injector\nvolume-4652                          37s         Normal    Started                      pod/gcepd-injector                                               Started container gcepd-injector\nvolume-4652                          30s         Normal    Killing                      pod/gcepd-injector                                               Stopping container gcepd-injector\nvolume-4652                          65s         Warning   ProvisioningFailed           persistentvolumeclaim/pvc-sn694                                  storageclass.storage.k8s.io \"volume-4652\" not found\nvolume-5786                          26s         Normal    Scheduled                    pod/gcepd-injector                                               Successfully assigned volume-5786/gcepd-injector to bootstrap-e2e-minion-group-vrtv\nvolume-5786  
                        19s         Normal    SuccessfulAttachVolume       pod/gcepd-injector                                               AttachVolume.Attach succeeded for volume \"gcepd-qvrsx\"\nvolume-5786                          7s          Normal    SuccessfulMountVolume        pod/gcepd-injector                                               MapVolume.MapPodDevice succeeded for volume \"gcepd-qvrsx\" globalMapPath \"/var/lib/kubelet/plugins/kubernetes.io/gce-pd/volumeDevices/bootstrap-e2e-efc400c5-c29c-4219-a84b-ba34f7ede41a\"\nvolume-5786                          7s          Normal    SuccessfulMountVolume        pod/gcepd-injector                                               MapVolume.MapPodDevice succeeded for volume \"gcepd-qvrsx\" volumeMapPath \"/var/lib/kubelet/pods/50203a1e-82b0-46e8-b0a5-5604753712bc/volumeDevices/kubernetes.io~gce-pd\"\nvolume-5786                          3s          Normal    Pulled                       pod/gcepd-injector                                               Container image \"docker.io/library/busybox:1.29\" already present on machine\nvolume-5786                          3s          Normal    Created                      pod/gcepd-injector                                               Created container gcepd-injector\nvolume-5786                          2s          Normal    Started                      pod/gcepd-injector                                               Started container gcepd-injector\nvolume-5786                          37s         Warning   ProvisioningFailed           persistentvolumeclaim/pvc-pqlsx                                  storageclass.storage.k8s.io \"volume-5786\" not found\nvolume-7834                          35s         Normal    Scheduled                    pod/gcepd-client                                                 Successfully assigned volume-7834/gcepd-client to bootstrap-e2e-minion-group-qkcq\nvolume-7834                          23s         Normal    SuccessfulAttachVolume  
     pod/gcepd-client                                                 AttachVolume.Attach succeeded for volume \"gcepd-kxjk5\"\nvolume-7834                          17s         Normal    Pulled                       pod/gcepd-client                                                 Container image \"docker.io/library/busybox:1.29\" already present on machine\nvolume-7834                          17s         Normal    Created                      pod/gcepd-client                                                 Created container gcepd-client\nvolume-7834                          17s         Normal    Started                      pod/gcepd-client                                                 Started container gcepd-client\nvolume-7834                          13s         Normal    Killing                      pod/gcepd-client                                                 Stopping container gcepd-client\nvolume-7834                          70s         Normal    Scheduled                    pod/gcepd-injector                                               Successfully assigned volume-7834/gcepd-injector to bootstrap-e2e-minion-group-qn53\nvolume-7834                          63s         Normal    SuccessfulAttachVolume       pod/gcepd-injector                                               AttachVolume.Attach succeeded for volume \"gcepd-kxjk5\"\nvolume-7834                          56s         Normal    Pulled                       pod/gcepd-injector                                               Container image \"docker.io/library/busybox:1.29\" already present on machine\nvolume-7834                          55s         Normal    Created                      pod/gcepd-injector                                               Created container gcepd-injector\nvolume-7834                          55s         Normal    Started                      pod/gcepd-injector                                               Started container gcepd-injector\nvolume-7834                  
        44s         Normal    Killing                      pod/gcepd-injector                                               Stopping container gcepd-injector\nvolume-7834                          82s         Warning   ProvisioningFailed           persistentvolumeclaim/pvc-42fr5                                  storageclass.storage.k8s.io \"volume-7834\" not found\nvolume-8704                          10s         Normal    Pulled                       pod/hostexec-bootstrap-e2e-minion-group-qkcq-q66dm               Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nvolume-8704                          10s         Normal    Created                      pod/hostexec-bootstrap-e2e-minion-group-qkcq-q66dm               Created container agnhost\nvolume-8704                          10s         Normal    Started                      pod/hostexec-bootstrap-e2e-minion-group-qkcq-q66dm               Started container agnhost\nvolume-8704                          3s          Warning   ProvisioningFailed           persistentvolumeclaim/pvc-5jxrg                                  storageclass.storage.k8s.io \"volume-8704\" not found\nvolume-8973                          20s         Normal    Pulled                       pod/hostexec-bootstrap-e2e-minion-group-qn53-9nw9z               Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nvolume-8973                          20s         Normal    Created                      pod/hostexec-bootstrap-e2e-minion-group-qn53-9nw9z               Created container agnhost\nvolume-8973                          20s         Normal    Started                      pod/hostexec-bootstrap-e2e-minion-group-qn53-9nw9z               Started container agnhost\nvolume-8973                          4s          Normal    Pulled                       pod/local-injector                                               Container image \"docker.io/library/busybox:1.29\" 
already present on machine\nvolume-8973                          4s          Normal    Created                      pod/local-injector                                               Created container local-injector\nvolume-8973                          3s          Normal    Started                      pod/local-injector                                               Started container local-injector\nvolume-8973                          14s         Warning   ProvisioningFailed           persistentvolumeclaim/pvc-n8cp7                                  storageclass.storage.k8s.io \"volume-8973\" not found\nvolume-9958                          13s         Normal    Pulled                       pod/hostexec-bootstrap-e2e-minion-group-qn53-lxnz8               Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nvolume-9958                          13s         Normal    Created                      pod/hostexec-bootstrap-e2e-minion-group-qn53-lxnz8               Created container agnhost\nvolume-9958                          12s         Normal    Started                      pod/hostexec-bootstrap-e2e-minion-group-qn53-lxnz8               Started container agnhost\nvolume-9958                          5s          Warning   ProvisioningFailed           persistentvolumeclaim/pvc-nthmv                                  storageclass.storage.k8s.io \"volume-9958\" not found\n"
Jan 15 16:16:10.949: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.107.52 --kubeconfig=/workspace/.kube/config get horizontalpodautoscalers --all-namespaces'
Jan 15 16:16:11.584: INFO: stderr: ""
Jan 15 16:16:11.585: INFO: stdout: "NAMESPACE      NAME             REFERENCE         TARGETS         MINPODS   MAXPODS   REPLICAS   AGE\nkubectl-7630   hpa2g5bncrtz7t   something/cross   <unknown>/80%   1         3         0          1s\n"
Jan 15 16:16:12.150: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.107.52 --kubeconfig=/workspace/.kube/config get jobs --all-namespaces'
Jan 15 16:16:12.528: INFO: stderr: ""
Jan 15 16:16:12.528: INFO: stdout: "NAMESPACE      NAME             COMPLETIONS   DURATION   AGE\nkubectl-7630   job1g5bncrtz7t   0/1                      1s\n"
... skipping 62 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  kubectl get output
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:424
    should contain custom columns for each resource
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:425
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client kubectl get output should contain custom columns for each resource","total":-1,"completed":2,"skipped":11,"failed":0}
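The custom-columns check that just passed boils down to mapping a column spec such as `NAME:.metadata.name` onto dotted field lookups in each returned object. A minimal Python sketch of that mapping (hypothetical helper, not the kubectl or e2e framework implementation):

```python
# Sketch: how a kubectl-style custom-columns spec
# (e.g. "NAME:.metadata.name,NAMESPACE:.metadata.namespace")
# maps column headers onto dotted field paths in an object.
# Hypothetical helper for illustration only.

def render_custom_columns(spec: str, obj: dict) -> str:
    columns = [c.split(":", 1) for c in spec.split(",")]
    headers = [name for name, _ in columns]
    row = []
    for _, path in columns:
        value = obj
        for key in path.lstrip(".").split("."):
            value = value.get(key, "<none>") if isinstance(value, dict) else "<none>"
        row.append(str(value))
    return "   ".join(headers) + "\n" + "   ".join(row)

# Object shaped like the job listed above in the log.
job = {"metadata": {"name": "job1g5bncrtz7t", "namespace": "kubectl-7630"}}
print(render_custom_columns("NAME:.metadata.name,NAMESPACE:.metadata.namespace", job))
```

The real test asserts that `kubectl get <resource> -o custom-columns=...` emits a non-empty header plus one row per object for every resource kind; the sketch above only shows the header/row shape.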
[BeforeEach] [sig-api-machinery] client-go should negotiate
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 15 16:16:28.021: INFO: >>> kubeConfig: /workspace/.kube/config
[It] watch and report errors with accept "application/json,application/vnd.kubernetes.protobuf"
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/protocol.go:45
Jan 15 16:16:28.023: INFO: >>> kubeConfig: /workspace/.kube/config
[AfterEach] [sig-api-machinery] client-go should negotiate
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:16:28.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/json,application/vnd.kubernetes.protobuf\"","total":-1,"completed":3,"skipped":11,"failed":0}

SSSS
------------------------------
[BeforeEach] version v1
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 357 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy through a service and a pod  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume","total":-1,"completed":2,"skipped":7,"failed":0}
[BeforeEach] [sig-instrumentation] MetricsGrabber
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 15 16:16:27.518: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename metrics-grabber
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in metrics-grabber-2484
... skipping 7 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:16:28.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-2484" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a Scheduler.","total":-1,"completed":3,"skipped":7,"failed":0}

SSS
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":-1,"completed":4,"skipped":12,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 15 16:16:28.481: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 111 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:16:29.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-9205" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls","total":-1,"completed":4,"skipped":15,"failed":0}

SSSSS
------------------------------
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 15 16:15:39.261: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in init-container-7415
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
Jan 15 16:15:39.932: INFO: PodSpec: initContainers in spec.initContainers
Jan 15 16:16:34.290: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-a6d6f31a-10cf-490f-9efd-27a91145d0ed", GenerateName:"", Namespace:"init-container-7415", SelfLink:"/api/v1/namespaces/init-container-7415/pods/pod-init-a6d6f31a-10cf-490f-9efd-27a91145d0ed", UID:"c0ab8a56-be3e-4c51-956c-a5b049c44e9d", ResourceVersion:"5185", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63714701739, loc:(*time.Location)(0x7d16a20)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"932660898"}, Annotations:map[string]string{"kubernetes.io/psp":"e2e-test-privileged-psp"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-68bzb", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0014420c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), 
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-68bzb", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-68bzb", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", 
ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-68bzb", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002bc0c40), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"bootstrap-e2e-minion-group-qn53", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002dee060), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", 
Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002bc0cc0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002bc0ce0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002bc0ce8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002bc0cec), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714701740, loc:(*time.Location)(0x7d16a20)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714701740, loc:(*time.Location)(0x7d16a20)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714701740, loc:(*time.Location)(0x7d16a20)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714701739, loc:(*time.Location)(0x7d16a20)}}, Reason:"", Message:""}}, 
Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.138.0.5", PodIP:"10.64.1.27", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.64.1.27"}}, StartTime:(*v1.Time)(0xc001fc6040), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0008ea070)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0008ea0e0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9", ContainerID:"docker://cb4b789794dd62d5177de3820471029a11b10116210d9984a3fbfc09e1473a7b", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001fc6080), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001fc6060), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc002bc0d6f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:16:34.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7415" for this suite.


• [SLOW TEST:55.352 seconds]
[k8s.io] InitContainer [NodeConformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":-1,"completed":2,"skipped":21,"failed":0}
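The init-container test above (note `RestartCount:3` on `init1` in the pod dump) exercises the gating rule: with `restartPolicy: Always`, a failing init container is restarted indefinitely and app containers never start until every init container succeeds in order. A toy simulation of that rule (simplified model, not kubelet code):

```python
# Toy model of init-container gating: init containers run sequentially;
# a failure is retried (CrashLoopBackOff in practice, capped here), and
# app containers only start after all init containers succeed.
# Assumed/simplified semantics for illustration, not kubelet behavior.

def run_pod(init_containers, app_containers, max_failures=3):
    """Each container is a zero-arg callable returning True on success."""
    started_apps = []
    for init in init_containers:
        failures = 0
        while not init():
            failures += 1
            if failures > max_failures:
                # Init container keeps failing: pod stays Pending,
                # no app container ever starts (what this test asserts).
                return {"phase": "Pending", "started": started_apps}
        # next init container runs only after this one succeeds
    for app in app_containers:
        app()
        started_apps.append(getattr(app, "__name__", "app"))
    return {"phase": "Running", "started": started_apps}

status = run_pod([lambda: False, lambda: True], [lambda: True])
print(status["phase"])  # init1 always fails, so the pod never initializes
```

This mirrors the pod dump above: `init1` terminated and restarting, `init2` and `run1` still `Waiting`, pod phase `Pending`.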
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:16:34.615: INFO: Driver vsphere doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:16:34.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 60 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:112
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
... skipping 75 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (default fs)] provisioning
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should provision storage with mount options
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:173
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options","total":-1,"completed":2,"skipped":1,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:16:35.120: INFO: Driver cinder doesn't support ntfs -- skipping
... skipping 15 lines ...
      Driver cinder doesn't support ntfs -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:153
------------------------------
SSSSS
------------------------------
{"msg":"PASSED [sig-cli] Kubectl alpha client Kubectl run CronJob should create a CronJob","total":-1,"completed":2,"skipped":21,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 15 16:14:52.795: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 62 lines ...
STEP: cleaning the environment after gcepd
Jan 15 16:16:19.332: INFO: Deleting pod "gcepd-client" in namespace "volume-4652"
Jan 15 16:16:19.446: INFO: Wait up to 5m0s for pod "gcepd-client" to be fully deleted
STEP: Deleting pv and pvc
Jan 15 16:16:25.597: INFO: Deleting PersistentVolumeClaim "pvc-sn694"
Jan 15 16:16:25.665: INFO: Deleting PersistentVolume "gcepd-fzctn"
Jan 15 16:16:27.347: INFO: error deleting PD "bootstrap-e2e-e5cc1b37-568b-4feb-b28c-8137dec6bfcb": googleapi: Error 400: The disk resource 'projects/k8s-gce-soak-1-5/zones/us-west1-b/disks/bootstrap-e2e-e5cc1b37-568b-4feb-b28c-8137dec6bfcb' is already being used by 'projects/k8s-gce-soak-1-5/zones/us-west1-b/instances/bootstrap-e2e-minion-group-qn53', resourceInUseByAnotherResource
Jan 15 16:16:27.347: INFO: Couldn't delete PD "bootstrap-e2e-e5cc1b37-568b-4feb-b28c-8137dec6bfcb", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-gce-soak-1-5/zones/us-west1-b/disks/bootstrap-e2e-e5cc1b37-568b-4feb-b28c-8137dec6bfcb' is already being used by 'projects/k8s-gce-soak-1-5/zones/us-west1-b/instances/bootstrap-e2e-minion-group-qn53', resourceInUseByAnotherResource
Jan 15 16:16:34.742: INFO: Successfully deleted PD "bootstrap-e2e-e5cc1b37-568b-4feb-b28c-8137dec6bfcb".
Jan 15 16:16:34.742: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:16:34.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-4652" for this suite.
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (ext3)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should store data
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data","total":-1,"completed":3,"skipped":21,"failed":0}
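The PD cleanup above shows a common pattern: the GCE API rejects the delete with a 400 `resourceInUseByAnotherResource` while the disk is still attached to the node, so the helper sleeps and retries until detach completes. A sketch of that retry loop with a fake delete call standing in for the real GCE API (assumed names; not the e2e framework's code):

```python
import time

# Sketch of delete-with-retry as seen in the log: the delete fails while
# the disk is still attached, the caller sleeps, and a later attempt
# succeeds once the detach has finished. fake_delete simulates the GCE
# API; the real helper calls the compute disks.delete endpoint.

class ResourceInUse(Exception):
    """Stands in for googleapi Error 400: resourceInUseByAnotherResource."""

def delete_pd_with_retry(delete_fn, timeout_s=30.0, poll_s=0.01):
    deadline = time.monotonic() + timeout_s
    while True:
        try:
            delete_fn()
            return True
        except ResourceInUse:
            if time.monotonic() >= deadline:
                return False
            time.sleep(poll_s)  # the log's helper sleeps 5s; shortened here

# Simulate a disk that detaches after two failed delete attempts.
attempts = {"n": 0}
def fake_delete():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ResourceInUse("resourceInUseByAnotherResource")

print(delete_pd_with_retry(fake_delete))  # True once the third try succeeds
```

Bounding the retries with a deadline matters: a disk that never detaches (e.g. a leaked attachment) would otherwise hang cleanup forever instead of surfacing a failure.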

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:16:35.438: INFO: Only supported for providers [aws] (not gce)
... skipping 86 lines ...
• [SLOW TEST:14.339 seconds]
[sig-storage] HostPath
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":21,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:16:37.589: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:16:37.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 351 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:55
    [Testpattern: inline ephemeral CSI volume] ephemeral
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should create read-only inline ephemeral volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:116
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: inline ephemeral CSI volume] ephemeral should create read-only inline ephemeral volume","total":-1,"completed":2,"skipped":14,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:16:42.946: INFO: Driver cinder doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:16:42.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 127 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    Two pods mounting a local volume one after the other
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
      should be able to write from pod1 and read from pod2
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:250
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":2,"skipped":22,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:16:43.208: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 102 lines ...
• [SLOW TEST:10.897 seconds]
[sig-storage] ConfigMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":45,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:16:48.530: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 180 lines ...
• [SLOW TEST:36.655 seconds]
[sig-apps] Deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":2,"skipped":2,"failed":0}

S
------------------------------
[BeforeEach] [k8s.io] Container Runtime
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 23 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:168
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]","total":-1,"completed":3,"skipped":19,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:16:53.139: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:16:53.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 220 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (ext4)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should store data
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":2,"skipped":21,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data","total":-1,"completed":2,"skipped":10,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:16:53.357: INFO: Driver local doesn't support ntfs -- skipping
... skipping 15 lines ...
      Driver local doesn't support ntfs -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:153
------------------------------
SSSSSS
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl copy should copy a file from a running Pod","total":-1,"completed":4,"skipped":31,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 15 16:15:48.513: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-4682
... skipping 57 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    Two pods mounting a local volume one after the other
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
      should be able to write from pod1 and read from pod2
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:250
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":5,"skipped":31,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:16:54.368: INFO: Only supported for providers [aws] (not gce)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:16:54.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 70 lines ...
• [SLOW TEST:32.735 seconds]
[sig-network] DNS
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":-1,"completed":4,"skipped":25,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:16:54.991: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 140 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI online volume expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:530
    should expand volume without restarting pod if attach=off, nodeExpansion=on
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:545
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=off, nodeExpansion=on","total":-1,"completed":2,"skipped":10,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 39 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing single file [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:217
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":3,"skipped":26,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:16:55.890: INFO: Only supported for providers [aws] (not gce)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:16:55.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 100 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":3,"skipped":12,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-apps] Deployment
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 33 lines ...
• [SLOW TEST:9.786 seconds]
[sig-apps] Deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":6,"skipped":69,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 28 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:43
    files with FSGroup ownership should support (root,0644,tmpfs)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:62
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)","total":-1,"completed":3,"skipped":3,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:17:01.613: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 69 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl apply
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:922
    apply set/view last-applied
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:959
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply apply set/view last-applied","total":-1,"completed":4,"skipped":27,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:17:02.617: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 156 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should be able to unmount after the subpath directory is deleted
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:440
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":3,"skipped":12,"failed":0}

S
------------------------------
[BeforeEach] [k8s.io] Kubelet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 16 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when scheduling a busybox command in a pod
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":36,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:17:06.283: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 135 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly directory specified in the volumeMount
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:362
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":4,"skipped":31,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-apps] ReplicaSet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 19 lines ...
• [SLOW TEST:33.310 seconds]
[sig-apps] ReplicaSet
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a private image
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/replica_set.go:97
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a private image","total":-1,"completed":3,"skipped":32,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:17:10.486: INFO: Driver local doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:17:10.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 73 lines ...
• [SLOW TEST:7.073 seconds]
[sig-storage] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":13,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:17:13.343: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 70 lines ...
• [SLOW TEST:21.235 seconds]
[sig-api-machinery] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":-1,"completed":4,"skipped":22,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
... skipping 67 lines ...
• [SLOW TEST:21.873 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":3,"skipped":21,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:17:15.242: INFO: Driver csi-hostpath doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:17:15.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 51 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 105 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":37,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 89 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI Volume expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:419
    should expand volume without restarting pod if nodeExpansion=off
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:448
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume without restarting pod if nodeExpansion=off","total":-1,"completed":4,"skipped":10,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:17:17.655: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 109 lines ...
STEP: cleaning the environment after gcepd
Jan 15 16:16:55.968: INFO: Deleting pod "gcepd-client" in namespace "volume-5786"
Jan 15 16:16:56.135: INFO: Wait up to 5m0s for pod "gcepd-client" to be fully deleted
STEP: Deleting pv and pvc
Jan 15 16:17:06.490: INFO: Deleting PersistentVolumeClaim "pvc-pqlsx"
Jan 15 16:17:06.589: INFO: Deleting PersistentVolume "gcepd-qvrsx"
Jan 15 16:17:08.230: INFO: error deleting PD "bootstrap-e2e-efc400c5-c29c-4219-a84b-ba34f7ede41a": googleapi: Error 400: The disk resource 'projects/k8s-gce-soak-1-5/zones/us-west1-b/disks/bootstrap-e2e-efc400c5-c29c-4219-a84b-ba34f7ede41a' is already being used by 'projects/k8s-gce-soak-1-5/zones/us-west1-b/instances/bootstrap-e2e-minion-group-vrtv', resourceInUseByAnotherResource
Jan 15 16:17:08.230: INFO: Couldn't delete PD "bootstrap-e2e-efc400c5-c29c-4219-a84b-ba34f7ede41a", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-gce-soak-1-5/zones/us-west1-b/disks/bootstrap-e2e-efc400c5-c29c-4219-a84b-ba34f7ede41a' is already being used by 'projects/k8s-gce-soak-1-5/zones/us-west1-b/instances/bootstrap-e2e-minion-group-vrtv', resourceInUseByAnotherResource
Jan 15 16:17:14.736: INFO: error deleting PD "bootstrap-e2e-efc400c5-c29c-4219-a84b-ba34f7ede41a": googleapi: Error 400: The disk resource 'projects/k8s-gce-soak-1-5/zones/us-west1-b/disks/bootstrap-e2e-efc400c5-c29c-4219-a84b-ba34f7ede41a' is already being used by 'projects/k8s-gce-soak-1-5/zones/us-west1-b/instances/bootstrap-e2e-minion-group-vrtv', resourceInUseByAnotherResource
Jan 15 16:17:14.736: INFO: Couldn't delete PD "bootstrap-e2e-efc400c5-c29c-4219-a84b-ba34f7ede41a", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-gce-soak-1-5/zones/us-west1-b/disks/bootstrap-e2e-efc400c5-c29c-4219-a84b-ba34f7ede41a' is already being used by 'projects/k8s-gce-soak-1-5/zones/us-west1-b/instances/bootstrap-e2e-minion-group-vrtv', resourceInUseByAnotherResource
Jan 15 16:17:22.357: INFO: Successfully deleted PD "bootstrap-e2e-efc400c5-c29c-4219-a84b-ba34f7ede41a".
Jan 15 16:17:22.357: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:17:22.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-5786" for this suite.
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (block volmode)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should store data
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data","total":-1,"completed":2,"skipped":11,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:17:23.192: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 71 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing single file [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:217
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":4,"skipped":28,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 120 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly directory specified in the volumeMount
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:362
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":7,"skipped":42,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 62 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support non-existent path
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:192
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":3,"skipped":8,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:17:26.473: INFO: Driver cinder doesn't support ntfs -- skipping
... skipping 218 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly directory specified in the volumeMount
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:362
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":4,"skipped":30,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 26 lines ...
• [SLOW TEST:12.684 seconds]
[sig-storage] Secrets
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":24,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:17:27.634: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 43 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  When pod refers to non-existent ephemeral storage
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53
    should allow deletion of pod with invalid volume : projected
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : projected","total":-1,"completed":5,"skipped":28,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-apps] Deployment
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 28 lines ...
• [SLOW TEST:16.438 seconds]
[sig-apps] Deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":-1,"completed":9,"skipped":38,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:17:33.357: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:17:33.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 168 lines ...
• [SLOW TEST:6.992 seconds]
[sig-storage] PV Protection
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Verify "immediate" deletion of a PV that is not bound to a PVC
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:98
------------------------------
{"msg":"PASSED [sig-storage] PV Protection Verify \"immediate\" deletion of a PV that is not bound to a PVC","total":-1,"completed":5,"skipped":32,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] Servers with support for API chunking
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 76 lines ...
• [SLOW TEST:31.486 seconds]
[sig-api-machinery] Servers with support for API chunking
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should return chunks of results for list calls
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/chunking.go:77
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for API chunking should return chunks of results for list calls","total":-1,"completed":5,"skipped":41,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] Job
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 15 16:17:15.271: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename job
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in job-6056
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:17:34.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-6056" for this suite.


• [SLOW TEST:20.229 seconds]
[sig-apps] Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":4,"skipped":38,"failed":0}

SS
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]","total":-1,"completed":4,"skipped":26,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 15 16:16:44.541: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 57 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support non-existent path
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:192
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":5,"skipped":26,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:17:39.315: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 192 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:121
    with Single PV - PVC pairs
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:154
      create a PVC and a pre-bound PV: test write access
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:186
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access","total":-1,"completed":2,"skipped":6,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:17:42.652: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:17:42.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 113 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:17:45.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4664" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for cronjob","total":-1,"completed":3,"skipped":14,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:17:45.513: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 179 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    Two pods mounting a local volume one after the other
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
      should be able to write from pod1 and read from pod2
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:250
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":5,"skipped":18,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:17:55.875: INFO: Driver cinder doesn't support ext4 -- skipping
... skipping 109 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should store data
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":3,"skipped":34,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:17:56.689: INFO: Driver cinder doesn't support ntfs -- skipping
... skipping 190 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should not mount / map unused volumes in a pod
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:332
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod","total":-1,"completed":5,"skipped":20,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:17:58.555: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:17:58.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 90 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    Two pods mounting a local volume at the same time
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:243
      should be able to write from pod1 and read from pod2
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:244
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":3,"skipped":22,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:18:02.635: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 118 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directory
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:203
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":7,"skipped":71,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:18:04.122: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:18:04.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 185 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:55
    [Testpattern: Dynamic PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directories when readOnly specified in the volumeSource
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:392
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":2,"skipped":10,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:18:06.557: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 83 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:525
    should support exec through an HTTP proxy
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:585
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec through an HTTP proxy","total":-1,"completed":6,"skipped":33,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:18:08.157: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:18:08.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 115 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:525
    should support inline execution and attach
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:688
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support inline execution and attach","total":-1,"completed":5,"skipped":35,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:18:08.444: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:18:08.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 3 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: tmpfs]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 573 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI Volume expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:419
    should expand volume by restarting pod if attach=off, nodeExpansion=on
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:448
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=off, nodeExpansion=on","total":-1,"completed":4,"skipped":17,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 38 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should be able to unmount after the subpath directory is deleted
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:440
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":5,"skipped":33,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:18:19.437: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 120 lines ...
Jan 15 16:18:02.843: INFO: Trying to get logs from node bootstrap-e2e-minion-group-qkcq pod exec-volume-test-inlinevolume-f6q5 container exec-container-inlinevolume-f6q5: <nil>
STEP: delete the pod
Jan 15 16:18:03.857: INFO: Waiting for pod exec-volume-test-inlinevolume-f6q5 to disappear
Jan 15 16:18:04.058: INFO: Pod exec-volume-test-inlinevolume-f6q5 no longer exists
STEP: Deleting pod exec-volume-test-inlinevolume-f6q5
Jan 15 16:18:04.058: INFO: Deleting pod "exec-volume-test-inlinevolume-f6q5" in namespace "volume-8420"
Jan 15 16:18:05.741: INFO: error deleting PD "bootstrap-e2e-aa075f15-0963-4d9c-a88b-a2b2c32a2dde": googleapi: Error 400: The disk resource 'projects/k8s-gce-soak-1-5/zones/us-west1-b/disks/bootstrap-e2e-aa075f15-0963-4d9c-a88b-a2b2c32a2dde' is already being used by 'projects/k8s-gce-soak-1-5/zones/us-west1-b/instances/bootstrap-e2e-minion-group-qkcq', resourceInUseByAnotherResource
Jan 15 16:18:05.741: INFO: Couldn't delete PD "bootstrap-e2e-aa075f15-0963-4d9c-a88b-a2b2c32a2dde", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-gce-soak-1-5/zones/us-west1-b/disks/bootstrap-e2e-aa075f15-0963-4d9c-a88b-a2b2c32a2dde' is already being used by 'projects/k8s-gce-soak-1-5/zones/us-west1-b/instances/bootstrap-e2e-minion-group-qkcq', resourceInUseByAnotherResource
Jan 15 16:18:12.248: INFO: error deleting PD "bootstrap-e2e-aa075f15-0963-4d9c-a88b-a2b2c32a2dde": googleapi: Error 400: The disk resource 'projects/k8s-gce-soak-1-5/zones/us-west1-b/disks/bootstrap-e2e-aa075f15-0963-4d9c-a88b-a2b2c32a2dde' is already being used by 'projects/k8s-gce-soak-1-5/zones/us-west1-b/instances/bootstrap-e2e-minion-group-qkcq', resourceInUseByAnotherResource
Jan 15 16:18:12.248: INFO: Couldn't delete PD "bootstrap-e2e-aa075f15-0963-4d9c-a88b-a2b2c32a2dde", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-gce-soak-1-5/zones/us-west1-b/disks/bootstrap-e2e-aa075f15-0963-4d9c-a88b-a2b2c32a2dde' is already being used by 'projects/k8s-gce-soak-1-5/zones/us-west1-b/instances/bootstrap-e2e-minion-group-qkcq', resourceInUseByAnotherResource
Jan 15 16:18:19.628: INFO: Successfully deleted PD "bootstrap-e2e-aa075f15-0963-4d9c-a88b-a2b2c32a2dde".
Jan 15 16:18:19.628: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:18:19.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-8420" for this suite.
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (ext4)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":8,"skipped":44,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:18:20.130: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:18:20.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 77 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:18:20.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-3444" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a ControllerManager.","total":-1,"completed":5,"skipped":19,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:18:21.139: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 155 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1462
    should be able to retrieve and filter logs  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":4,"skipped":6,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 15 16:17:11.961: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 77 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directories when readOnly specified in the volumeSource
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:392
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":5,"skipped":6,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:18:22.353: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 37 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152

      Driver nfs doesn't support ext3 -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:153
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":-1,"completed":4,"skipped":40,"failed":0}
[BeforeEach] [sig-storage] Zone Support
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 15 16:18:21.585: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename zone-support
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in zone-support-9381
... skipping 160 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:18:23.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-8710" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource  [Conformance]","total":-1,"completed":9,"skipped":52,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 49 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should be able to unmount after the subpath directory is deleted
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:440
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":6,"skipped":42,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] Volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 80 lines ...
• [SLOW TEST:21.022 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":7,"skipped":36,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:18:29.198: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 57 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should provision a volume and schedule a pod with AllowedTopologies
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:163
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies","total":-1,"completed":10,"skipped":61,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for pod-Service: udp","total":-1,"completed":1,"skipped":4,"failed":0}
[BeforeEach] [sig-apps] CronJob
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 15 16:16:38.547: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename cronjob
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in cronjob-2168
... skipping 17 lines ...
• [SLOW TEST:116.323 seconds]
[sig-apps] CronJob
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete successful finished jobs with limit of one successful job
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:234
------------------------------
{"msg":"PASSED [sig-apps] CronJob should delete successful finished jobs with limit of one successful job","total":-1,"completed":2,"skipped":4,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] client-go should negotiate
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 4 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:18:34.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/vnd.kubernetes.protobuf,application/json\"","total":-1,"completed":11,"skipped":63,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:18:34.943: INFO: Driver vsphere doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:18:34.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 46 lines ...
• [SLOW TEST:11.047 seconds]
[sig-storage] ConfigMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":45,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:18:35.063: INFO: Only supported for providers [vsphere] (not gce)
... skipping 131 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  [k8s.io] [sig-node] Clean up pods on node
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    kubelet should be able to delete 10 pods per node in 1m0s.
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:340
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] kubelet [k8s.io] [sig-node] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.","total":-1,"completed":6,"skipped":33,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:18:35.369: INFO: Driver hostPathSymlink doesn't support ext4 -- skipping
... skipping 135 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directory
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:203
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":6,"skipped":21,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:18:38.533: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 15 lines ...
      Driver local doesn't support InlineVolume -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
SSSSSSSS
------------------------------
{"msg":"PASSED [sig-storage] Volumes ConfigMap should be mountable","total":-1,"completed":5,"skipped":40,"failed":0}
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 15 16:18:27.500: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in init-container-611
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
Jan 15 16:18:29.537: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:18:40.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-611" for this suite.
• [SLOW TEST:13.943 seconds]
[k8s.io] InitContainer [NodeConformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":6,"skipped":40,"failed":0}

S
------------------------------
[BeforeEach] [k8s.io] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 25 lines ...
• [SLOW TEST:19.253 seconds]
[k8s.io] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should contain environment variables for services [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":10,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:18:41.614: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:18:41.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 180 lines ...
• [SLOW TEST:6.732 seconds]
[sig-api-machinery] Discovery
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Custom resource should have storage version hash
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:44
------------------------------
{"msg":"PASSED [sig-api-machinery] Discovery Custom resource should have storage version hash","total":-1,"completed":3,"skipped":12,"failed":0}

SSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path","total":-1,"completed":5,"skipped":25,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 15 16:17:58.403: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 74 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directories when readOnly specified in the volumeSource
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:392
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":6,"skipped":25,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:18:48.541: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 31 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:18:48.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-3262" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] NodeLease when the NodeLease feature is enabled should have OwnerReferences set","total":-1,"completed":4,"skipped":15,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:18:49.252: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:18:49.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 27 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPath]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver hostPath doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 78 lines ...
• [SLOW TEST:40.294 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":6,"skipped":76,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:18:50.109: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 13 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:203

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":49,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 15 16:18:49.471: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 20 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193

      Driver "csi-hostpath" does not support exec - skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:94
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":31,"failed":0}
[BeforeEach] [sig-storage] CSI mock volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 15 16:17:45.615: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in csi-mock-volumes-5106
... skipping 88 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI attach test using mock driver
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:240
    should require VolumeAttach for drivers with attachment
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:262
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for drivers with attachment","total":-1,"completed":5,"skipped":31,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:18:55.843: INFO: Driver csi-hostpath doesn't support ntfs -- skipping
... skipping 28 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPathSymlink]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 131 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    One pod requesting one prebound PVC
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
      should be able to mount volume and read from pod1
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":5,"skipped":54,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:18:56.397: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 99 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:18:57.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "runtimeclass-8992" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with conflicting node selector","total":-1,"completed":6,"skipped":70,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:18:57.585: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 65 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  When pod refers to non-existent ephemeral storage
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53
    should allow deletion of pod with invalid volume : configmap
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : configmap","total":-1,"completed":10,"skipped":54,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":14,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 15 16:17:26.610: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 86 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should store data
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":6,"skipped":14,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:19:02.543: INFO: Distro gci doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:19:02.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 91 lines ...
• [SLOW TEST:14.927 seconds]
[sig-storage] Secrets
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":27,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:19:03.472: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:19:03.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 178 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directory
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:203
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":72,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 15 16:18:21.472: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 44 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:377
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":9,"skipped":72,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 78 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support file as subpath [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:228
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":6,"skipped":31,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:19:05.308: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 48 lines ...
• [SLOW TEST:16.297 seconds]
[sig-storage] Projected configMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":21,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:19:05.563: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 140 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:19:05.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7859" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":-1,"completed":7,"skipped":18,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:19:05.901: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 52 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: vsphere]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Only supported for providers [vsphere] (not gce)

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1383
------------------------------
... skipping 59 lines ...
[sig-storage] CSI Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:55
    [Testpattern: Dynamic PV (immediate binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver "csi-hostpath" does not support topology - skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:96
------------------------------
... skipping 30 lines ...
• [SLOW TEST:11.642 seconds]
[sig-storage] Secrets
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":50,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:19:06.094: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 79 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing single file [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:217
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":6,"skipped":43,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:19:06.459: INFO: Only supported for providers [azure] (not gce)
... skipping 167 lines ...
  Only supported for providers [vsphere] (not gce)

  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere/vsphere_zone_support.go:106
------------------------------
S
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":-1,"completed":4,"skipped":24,"failed":0}
[BeforeEach] [sig-apps] StatefulSet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 15 16:18:42.540: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-8394
... skipping 28 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    should have a working scale subresource [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":5,"skipped":24,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:19:08.306: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 15 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":8,"skipped":50,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:93
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
... skipping 44 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      Verify if offline PVC expansion works
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:162
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":9,"skipped":50,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:19:08.413: INFO: Only supported for providers [openstack] (not gce)
... skipping 63 lines ...
      Distro gci doesn't support ntfs -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:159
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":6,"skipped":37,"failed":0}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 15 16:18:42.526: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-7982
... skipping 31 lines ...
• [SLOW TEST:26.228 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":-1,"completed":7,"skipped":37,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:19:08.756: INFO: Driver gluster doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:19:08.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 86 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly directory specified in the volumeMount
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:362
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":4,"skipped":38,"failed":0}
[BeforeEach] [k8s.io] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 15 16:19:04.060: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-4798
... skipping 13 lines ...
• [SLOW TEST:12.321 seconds]
[k8s.io] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should get a host IP [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":38,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:19:16.397: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 76 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPathSymlink]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 29 lines ...
• [SLOW TEST:11.286 seconds]
[k8s.io] [sig-node] Security Context
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:68
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]","total":-1,"completed":8,"skipped":35,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl Port forwarding
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 38 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:444
    that expects NO client request
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:454
      should support a client that connects, sends DATA, and disconnects
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:455
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects NO client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":6,"skipped":49,"failed":0}

SS
------------------------------
[BeforeEach] [k8s.io] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 24 lines ...
• [SLOW TEST:11.082 seconds]
[k8s.io] Variable Expansion
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":62,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:19:19.514: INFO: Only supported for providers [vsphere] (not gce)
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:19:19.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 33 lines ...
Jan 15 16:18:35.104: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-nvkzq] to have phase Bound
Jan 15 16:18:35.266: INFO: PersistentVolumeClaim pvc-nvkzq found but phase is Pending instead of Bound.
Jan 15 16:18:37.557: INFO: PersistentVolumeClaim pvc-nvkzq found and phase=Bound (2.45296729s)
Jan 15 16:18:37.557: INFO: Waiting up to 3m0s for PersistentVolume gce-tksmh to have phase Bound
Jan 15 16:18:37.803: INFO: PersistentVolume gce-tksmh found and phase=Bound (246.736605ms)
STEP: Creating the Client Pod
[It] should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:139
STEP: Deleting the Persistent Volume
Jan 15 16:18:56.741: INFO: Deleting PersistentVolume "gce-tksmh"
STEP: Deleting the client pod
Jan 15 16:18:57.117: INFO: Deleting pod "pvc-tester-4z4rw" in namespace "pv-8543"
Jan 15 16:18:57.215: INFO: Wait up to 5m0s for pod "pvc-tester-4z4rw" to be fully deleted
... skipping 14 lines ...
Jan 15 16:19:20.848: INFO: Successfully deleted PD "bootstrap-e2e-f66adfeb-ec6c-4a05-bf0a-9126a3532aa6".


• [SLOW TEST:51.647 seconds]
[sig-storage] PersistentVolumes GCEPD
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:139
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes GCEPD should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach","total":-1,"completed":8,"skipped":43,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:19:20.859: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 49 lines ...
• [SLOW TEST:14.493 seconds]
[sig-storage] Projected downwardAPI
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":63,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Zone Support
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 117 lines ...
• [SLOW TEST:14.247 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should release NodePorts on delete
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1873
------------------------------
{"msg":"PASSED [sig-network] Services should release NodePorts on delete","total":-1,"completed":8,"skipped":39,"failed":0}

SS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 38 lines ...
• [SLOW TEST:33.264 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":7,"skipped":82,"failed":0}

SS
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 59 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    should implement legacy replacement when the update strategy is OnDelete
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:495
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should implement legacy replacement when the update strategy is OnDelete","total":-1,"completed":4,"skipped":23,"failed":0}

SS
------------------------------
[BeforeEach] [k8s.io] Container Runtime
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 19 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when running a container with a new image
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:263
      should not be able to pull from private registry without secret [NodeConformance]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:380
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]","total":-1,"completed":7,"skipped":51,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:19:32.447: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 91 lines ...
• [SLOW TEST:253.066 seconds]
[k8s.io] Probing container
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":19,"failed":0}

SS
------------------------------
[BeforeEach] [sig-network] Networking
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 131 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  Granular Checks: Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:161
    should function for pod-Service: http
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:163
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for pod-Service: http","total":-1,"completed":3,"skipped":16,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:19:33.713: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 70 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:19:34.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5186" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":8,"skipped":66,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:19:34.710: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 262 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support file as subpath [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:228
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":7,"skipped":77,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:19:35.055: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 74 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should not mount / map unused volumes in a pod
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:332
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod","total":-1,"completed":7,"skipped":31,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:19:35.614: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:19:35.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 40 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when running a container with a new image
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:263
      should not be able to pull image from invalid registry [NodeConformance]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:369
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]","total":-1,"completed":8,"skipped":84,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:19:35.802: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 125 lines ...
• [SLOW TEST:16.890 seconds]
[sig-api-machinery] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a persistent volume claim with a storage class. [sig-storage]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:507
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class. [sig-storage]","total":-1,"completed":9,"skipped":50,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 11 lines ...
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-1203
STEP: Creating statefulset with conflicting port in namespace statefulset-1203
STEP: Waiting until pod test-pod will start running in namespace statefulset-1203
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-1203
Jan 15 16:19:16.971: INFO: Observed stateful pod in namespace: statefulset-1203, name: ss-0, uid: 2b0eb952-b18f-4f9c-8684-d3d9994d21f5, status phase: Pending. Waiting for statefulset controller to delete.
Jan 15 16:19:20.987: INFO: Observed stateful pod in namespace: statefulset-1203, name: ss-0, uid: 2b0eb952-b18f-4f9c-8684-d3d9994d21f5, status phase: Failed. Waiting for statefulset controller to delete.
Jan 15 16:19:21.344: INFO: Observed stateful pod in namespace: statefulset-1203, name: ss-0, uid: 2b0eb952-b18f-4f9c-8684-d3d9994d21f5, status phase: Failed. Waiting for statefulset controller to delete.
Jan 15 16:19:21.643: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-1203
STEP: Removing pod with conflicting port in namespace statefulset-1203
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-1203 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Jan 15 16:19:27.246: INFO: Deleting all statefulset in ns statefulset-1203
... skipping 67 lines ...
• [SLOW TEST:9.116 seconds]
[sig-network] Firewall rule
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should have correct firewall rules for e2e cluster
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:197
------------------------------
{"msg":"PASSED [sig-network] Firewall rule should have correct firewall rules for e2e cluster","total":-1,"completed":4,"skipped":23,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:19:43.742: INFO: Driver gluster doesn't support ntfs -- skipping
... skipping 47 lines ...
• [SLOW TEST:9.110 seconds]
[sig-storage] Downward API volume
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:105
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":8,"skipped":36,"failed":0}

S
------------------------------
[BeforeEach] [sig-auth] Certificates API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 17 lines ...
• [SLOW TEST:15.466 seconds]
[sig-auth] Certificates API
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should support building a client with a CSR
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/certificates.go:39
------------------------------
{"msg":"PASSED [sig-auth] Certificates API should support building a client with a CSR","total":-1,"completed":5,"skipped":25,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:19:45.064: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:19:45.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 84 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:525
    should contain last line of the log
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:737
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":48,"failed":0}
[BeforeEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 15 16:19:34.975: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-9288
... skipping 22 lines ...
• [SLOW TEST:10.804 seconds]
[sig-storage] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":48,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:19:45.796: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 13 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
{"msg":"PASSED [sig-apps] Deployment iterative rollouts should eventually progress","total":-1,"completed":6,"skipped":26,"failed":0}
[BeforeEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 15 16:19:34.713: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-9792
... skipping 133 lines ...
Jan 15 16:19:35.877: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: cleaning the environment after flex
Jan 15 16:19:36.659: INFO: Deleting pod "flex-client" in namespace "flexvolume-1459"
Jan 15 16:19:36.902: INFO: Wait up to 5m0s for pod "flex-client" to be fully deleted
STEP: waiting for flex client pod to terminate
Jan 15 16:19:45.333: INFO: Waiting up to 5m0s for pod "flex-client" in namespace "flexvolume-1459" to be "terminated due to deadline exceeded"
Jan 15 16:19:45.401: INFO: Pod "flex-client" in namespace "flexvolume-1459" not found. Error: pods "flex-client" not found
STEP: uninstalling flexvolume dummy-flexvolume-1459 from node bootstrap-e2e-minion-group-vrtv
Jan 15 16:19:45.401: INFO: Getting external IP address for bootstrap-e2e-minion-group-vrtv
Jan 15 16:19:45.846: INFO: ssh prow@35.185.202.51:22: command:   sudo rm -r /home/kubernetes/flexvolume/k8s~dummy-flexvolume-1459
Jan 15 16:19:45.846: INFO: ssh prow@35.185.202.51:22: stdout:    ""
Jan 15 16:19:45.846: INFO: ssh prow@35.185.202.51:22: stderr:    ""
Jan 15 16:19:45.846: INFO: ssh prow@35.185.202.51:22: exit code: 0
... skipping 6 lines ...
• [SLOW TEST:26.842 seconds]
[sig-storage] Flexvolumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should be mountable when non-attachable
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/flexvolume.go:187
------------------------------
{"msg":"PASSED [sig-storage] Flexvolumes should be mountable when non-attachable","total":-1,"completed":11,"skipped":65,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:19:46.367: INFO: Driver nfs doesn't support ntfs -- skipping
... skipping 98 lines ...
• [SLOW TEST:10.684 seconds]
[sig-storage] Projected downwardAPI
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":90,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:19:46.505: INFO: Only supported for providers [vsphere] (not gce)
... skipping 154 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:377
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":6,"skipped":28,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:19:50.963: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:19:50.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 124 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directories when readOnly specified in the volumeSource
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:392
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":3,"skipped":15,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:19:51.092: INFO: Only supported for providers [azure] (not gce)
... skipping 13 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:162

      Only supported for providers [azure] (not gce)

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1512
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":-1,"completed":8,"skipped":64,"failed":0}
[BeforeEach] [sig-node] Downward API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 15 16:19:35.927: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-7667
... skipping 22 lines ...
• [SLOW TEST:16.064 seconds]
[sig-node] Downward API
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":64,"failed":0}

SS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 17 lines ...
• [SLOW TEST:68.611 seconds]
[sig-api-machinery] Garbage collector
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should support orphan deletion of custom resources
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/garbage_collector.go:973
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should support orphan deletion of custom resources","total":-1,"completed":7,"skipped":22,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:19:52.906: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 109 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should store data
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":7,"skipped":41,"failed":0}

S
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":75,"failed":0}
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 15 16:19:42.721: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-wrapper-1946
... skipping 12 lines ...
• [SLOW TEST:12.393 seconds]
[sig-storage] EmptyDir wrapper volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not conflict [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":10,"skipped":75,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:19:55.119: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 54 lines ...
• [SLOW TEST:23.178 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":-1,"completed":4,"skipped":18,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:19:56.901: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 73 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should be able to unmount after the subpath directory is deleted
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:440
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":8,"skipped":35,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 58 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (default fs)] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should support existing directory
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:203
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory","total":-1,"completed":6,"skipped":29,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:20:00.264: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:20:00.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 131 lines ...
Jan 15 16:19:44.254: INFO: Trying to get logs from node bootstrap-e2e-minion-group-qkcq pod exec-volume-test-inlinevolume-2mmb container exec-container-inlinevolume-2mmb: <nil>
STEP: delete the pod
Jan 15 16:19:45.009: INFO: Waiting for pod exec-volume-test-inlinevolume-2mmb to disappear
Jan 15 16:19:45.077: INFO: Pod exec-volume-test-inlinevolume-2mmb no longer exists
STEP: Deleting pod exec-volume-test-inlinevolume-2mmb
Jan 15 16:19:45.077: INFO: Deleting pod "exec-volume-test-inlinevolume-2mmb" in namespace "volume-8412"
Jan 15 16:19:46.855: INFO: error deleting PD "bootstrap-e2e-2429b8be-367b-4e53-8c7c-e6dc641ea8bb": googleapi: Error 400: The disk resource 'projects/k8s-gce-soak-1-5/zones/us-west1-b/disks/bootstrap-e2e-2429b8be-367b-4e53-8c7c-e6dc641ea8bb' is already being used by 'projects/k8s-gce-soak-1-5/zones/us-west1-b/instances/bootstrap-e2e-minion-group-qkcq', resourceInUseByAnotherResource
Jan 15 16:19:46.855: INFO: Couldn't delete PD "bootstrap-e2e-2429b8be-367b-4e53-8c7c-e6dc641ea8bb", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-gce-soak-1-5/zones/us-west1-b/disks/bootstrap-e2e-2429b8be-367b-4e53-8c7c-e6dc641ea8bb' is already being used by 'projects/k8s-gce-soak-1-5/zones/us-west1-b/instances/bootstrap-e2e-minion-group-qkcq', resourceInUseByAnotherResource
Jan 15 16:19:53.408: INFO: error deleting PD "bootstrap-e2e-2429b8be-367b-4e53-8c7c-e6dc641ea8bb": googleapi: Error 400: The disk resource 'projects/k8s-gce-soak-1-5/zones/us-west1-b/disks/bootstrap-e2e-2429b8be-367b-4e53-8c7c-e6dc641ea8bb' is already being used by 'projects/k8s-gce-soak-1-5/zones/us-west1-b/instances/bootstrap-e2e-minion-group-qkcq', resourceInUseByAnotherResource
Jan 15 16:19:53.408: INFO: Couldn't delete PD "bootstrap-e2e-2429b8be-367b-4e53-8c7c-e6dc641ea8bb", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-gce-soak-1-5/zones/us-west1-b/disks/bootstrap-e2e-2429b8be-367b-4e53-8c7c-e6dc641ea8bb' is already being used by 'projects/k8s-gce-soak-1-5/zones/us-west1-b/instances/bootstrap-e2e-minion-group-qkcq', resourceInUseByAnotherResource
Jan 15 16:20:00.806: INFO: Successfully deleted PD "bootstrap-e2e-2429b8be-367b-4e53-8c7c-e6dc641ea8bb".
Jan 15 16:20:00.806: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:20:00.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-8412" for this suite.
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":10,"skipped":54,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:20:01.256: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 40 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  When creating a container with runAsNonRoot
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:98
    should not run without a specified user ID
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:153
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsNonRoot should not run without a specified user ID","total":-1,"completed":11,"skipped":84,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:20:04.702: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 61 lines ...
      Driver nfs doesn't support ntfs -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:153
------------------------------
S
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":-1,"completed":7,"skipped":36,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 15 16:19:39.961: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-6347
... skipping 61 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:187
    One pod requesting one prebound PVC
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
      should be able to mount volume and read from pod1
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":8,"skipped":36,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:20:07.679: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:20:07.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 102 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":10,"skipped":56,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:20:09.478: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 52 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPath]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:191

      Driver hostPath doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 75 lines ...
• [SLOW TEST:10.515 seconds]
[sig-storage] Downward API volume
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":31,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:20:10.786: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:20:10.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 131 lines ...
• [SLOW TEST:23.729 seconds]
[sig-api-machinery] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":-1,"completed":4,"skipped":23,"failed":0}

SS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 58 lines ...
• [SLOW TEST:12.518 seconds]
[sig-storage] Downward API volume
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's cpu request [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":93,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:20:17.241: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 70 lines ...
• [SLOW TEST:9.198 seconds]
[sig-storage] ConfigMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:56
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":11,"skipped":77,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:20:18.709: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 15 16:20:18.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 67 lines ...
Jan 15 16:18:34.951: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename cronjob
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in cronjob-4146
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] CronJob
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:56
[It] should delete failed finished jobs with limit of one job
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:245
STEP: Creating an AllowConcurrent cronjob with custom history limit
STEP: Ensuring a finished job exists
STEP: Ensuring a finished job exists by listing jobs explicitly
STEP: Ensuring this job and its pods does not exist anymore
STEP: Ensuring there is 1 finished job by listing jobs explicitly
... skipping 4 lines ...
STEP: Destroying namespace "cronjob-4146" for this suite.


• [SLOW TEST:103.973 seconds]
[sig-apps] CronJob
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete failed finished jobs with limit of one job
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:245
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage]","total":-1,"completed":7,"skipped":34,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 15 16:20:00.799: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-2098
... skipping 47 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:281

      Disabled temporarily, reopen after #73168 is fixed

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:282
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":-1,"completed":5,"skipped":27,"failed":0}
[BeforeEach] [sig-storage] Projected combined
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 15 16:20:10.967: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-1138
... skipping 22 lines ...
• [SLOW TEST:8.567 seconds]
[sig-storage] Projected combined
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":27,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
Jan 15 16:20:19.542: INFO: Distro gci doesn't support ntfs -- skipping
... skipping 35 lines ...
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:377

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:148
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":-1,"completed":10,"skipped":75,"failed":0}
[BeforeEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kube
... skipping 47426 lines ...
": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-hvwts\",\n                \"uid\": \"d4e4a46c-d5de-45fd-a093-ad3bf543d230\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"601\",\n                \"fieldPath\": \"spec.containers{fluentd-gcp}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container fluentd-gcp\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-q10p\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:44Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:44Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-hvwts.15ea1b5cd7e65490\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-hvwts.15ea1b5cd7e65490\",\n                \"uid\": \"f7a285b8-1b4b-44db-9d43-fe5ddb5dfd50\",\n                \"resourceVersion\": \"304\",\n                \"creationTimestamp\": \"2020-01-15T16:11:44Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-hvwts\",\n                \"uid\": \"d4e4a46c-d5de-45fd-a093-ad3bf543d230\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"601\",\n                \"fieldPath\": \"spec.containers{prometheus-to-sd-exporter}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/prometheus-to-sd:v0.5.0\\\" already present on machine\",\n            \"source\": {\n                \"component\": 
\"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-q10p\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:44Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:44Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-hvwts.15ea1b5cdc2a996e\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-hvwts.15ea1b5cdc2a996e\",\n                \"uid\": \"d28f58d0-b7e0-420f-a8b3-0e873280a6cb\",\n                \"resourceVersion\": \"305\",\n                \"creationTimestamp\": \"2020-01-15T16:11:44Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-hvwts\",\n                \"uid\": \"d4e4a46c-d5de-45fd-a093-ad3bf543d230\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"601\",\n                \"fieldPath\": \"spec.containers{prometheus-to-sd-exporter}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container prometheus-to-sd-exporter\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-q10p\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:44Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:44Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-hvwts.15ea1b5cedcdb823\",\n        
        \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-hvwts.15ea1b5cedcdb823\",\n                \"uid\": \"9b13dd50-f563-423e-bf9e-30d78f2e7103\",\n                \"resourceVersion\": \"306\",\n                \"creationTimestamp\": \"2020-01-15T16:11:45Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-hvwts\",\n                \"uid\": \"d4e4a46c-d5de-45fd-a093-ad3bf543d230\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"601\",\n                \"fieldPath\": \"spec.containers{prometheus-to-sd-exporter}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container prometheus-to-sd-exporter\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-q10p\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:45Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:45Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-hvwts.15ea1b5f6153a628\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-hvwts.15ea1b5f6153a628\",\n                \"uid\": \"2c4c1d2c-6942-489f-a046-74527c91ff1e\",\n                \"resourceVersion\": \"340\",\n                \"creationTimestamp\": \"2020-01-15T16:11:55Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": 
\"fluentd-gcp-v3.2.0-hvwts\",\n                \"uid\": \"d4e4a46c-d5de-45fd-a093-ad3bf543d230\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"601\",\n                \"fieldPath\": \"spec.containers{fluentd-gcp}\"\n            },\n            \"reason\": \"Killing\",\n            \"message\": \"Stopping container fluentd-gcp\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-q10p\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:55Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:55Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-hvwts.15ea1b5f616e36b9\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-hvwts.15ea1b5f616e36b9\",\n                \"uid\": \"116e345c-fc6a-490f-9ce3-8d3d37f1c675\",\n                \"resourceVersion\": \"341\",\n                \"creationTimestamp\": \"2020-01-15T16:11:55Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-hvwts\",\n                \"uid\": \"d4e4a46c-d5de-45fd-a093-ad3bf543d230\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"601\",\n                \"fieldPath\": \"spec.containers{prometheus-to-sd-exporter}\"\n            },\n            \"reason\": \"Killing\",\n            \"message\": \"Stopping container prometheus-to-sd-exporter\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-q10p\"\n            },\n    
        \"firstTimestamp\": \"2020-01-15T16:11:55Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:55Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-m4h9z.15ea1b588c3082db\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-m4h9z.15ea1b588c3082db\",\n                \"uid\": \"2b27ac73-0a25-4f49-99bf-97e850328255\",\n                \"resourceVersion\": \"80\",\n                \"creationTimestamp\": \"2020-01-15T16:11:26Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-m4h9z\",\n                \"uid\": \"3850799a-5498-41e5-9094-e64940929c0b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"513\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/fluentd-gcp-v3.2.0-m4h9z to bootstrap-e2e-master\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:26Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:26Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-m4h9z.15ea1b5aaad080b5\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-m4h9z.15ea1b5aaad080b5\",\n                \"uid\": 
\"e43e9f9d-56f9-4daf-97c0-9d60fcd6095c\",\n                \"resourceVersion\": \"248\",\n                \"creationTimestamp\": \"2020-01-15T16:11:35Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-m4h9z\",\n                \"uid\": \"3850799a-5498-41e5-9094-e64940929c0b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"527\",\n                \"fieldPath\": \"spec.containers{fluentd-gcp}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-master\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:35Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:35Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-m4h9z.15ea1b5df081f4b3\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-m4h9z.15ea1b5df081f4b3\",\n                \"uid\": \"efbc4039-53eb-4d7d-ace0-986d2c8e98e9\",\n                \"resourceVersion\": \"314\",\n                \"creationTimestamp\": \"2020-01-15T16:11:49Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-m4h9z\",\n                \"uid\": \"3850799a-5498-41e5-9094-e64940929c0b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"527\",\n        
        \"fieldPath\": \"spec.containers{fluentd-gcp}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-master\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:49Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:49Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-m4h9z.15ea1b5ec33218e5\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-m4h9z.15ea1b5ec33218e5\",\n                \"uid\": \"c4c84c72-21e0-458e-9dff-f279094fc3c0\",\n                \"resourceVersion\": \"322\",\n                \"creationTimestamp\": \"2020-01-15T16:11:53Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-m4h9z\",\n                \"uid\": \"3850799a-5498-41e5-9094-e64940929c0b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"527\",\n                \"fieldPath\": \"spec.containers{fluentd-gcp}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container fluentd-gcp\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-master\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:53Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:53Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n          
  \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-m4h9z.15ea1b5ee01a736a\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-m4h9z.15ea1b5ee01a736a\",\n                \"uid\": \"ab358e99-882a-4d1f-b2fb-aef2aafdc6f1\",\n                \"resourceVersion\": \"329\",\n                \"creationTimestamp\": \"2020-01-15T16:11:53Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-m4h9z\",\n                \"uid\": \"3850799a-5498-41e5-9094-e64940929c0b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"527\",\n                \"fieldPath\": \"spec.containers{fluentd-gcp}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container fluentd-gcp\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-master\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:53Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:53Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-m4h9z.15ea1b5ee04fd3ae\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-m4h9z.15ea1b5ee04fd3ae\",\n                \"uid\": \"feab2bec-2e3d-459d-8efb-eb19dcdaf0f9\",\n                \"resourceVersion\": \"330\",\n                \"creationTimestamp\": 
\"2020-01-15T16:11:53Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-m4h9z\",\n                \"uid\": \"3850799a-5498-41e5-9094-e64940929c0b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"527\",\n                \"fieldPath\": \"spec.containers{prometheus-to-sd-exporter}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/prometheus-to-sd:v0.5.0\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-master\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:53Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:53Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-m4h9z.15ea1b5ee78eaf71\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-m4h9z.15ea1b5ee78eaf71\",\n                \"uid\": \"3bf0d74e-933b-49ca-a90d-164625a31bc3\",\n                \"resourceVersion\": \"331\",\n                \"creationTimestamp\": \"2020-01-15T16:11:53Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-m4h9z\",\n                \"uid\": \"3850799a-5498-41e5-9094-e64940929c0b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"527\",\n                \"fieldPath\": \"spec.containers{prometheus-to-sd-exporter}\"\n            },\n            \"reason\": 
\"Created\",\n            \"message\": \"Created container prometheus-to-sd-exporter\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-master\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:53Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:53Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-m4h9z.15ea1b5f06460afc\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-m4h9z.15ea1b5f06460afc\",\n                \"uid\": \"566606f2-8bca-46d2-ad31-fb2cfa429538\",\n                \"resourceVersion\": \"335\",\n                \"creationTimestamp\": \"2020-01-15T16:11:54Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-m4h9z\",\n                \"uid\": \"3850799a-5498-41e5-9094-e64940929c0b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"527\",\n                \"fieldPath\": \"spec.containers{prometheus-to-sd-exporter}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container prometheus-to-sd-exporter\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-master\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:54Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:54Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        
},\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-m4h9z.15ea1b67b939415e\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-m4h9z.15ea1b67b939415e\",\n                \"uid\": \"0a72d16b-e5a0-4f60-bb0b-b7f9af5983c7\",\n                \"resourceVersion\": \"418\",\n                \"creationTimestamp\": \"2020-01-15T16:12:31Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-m4h9z\",\n                \"uid\": \"3850799a-5498-41e5-9094-e64940929c0b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"527\",\n                \"fieldPath\": \"spec.containers{fluentd-gcp}\"\n            },\n            \"reason\": \"Killing\",\n            \"message\": \"Stopping container fluentd-gcp\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-master\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:12:31Z\",\n            \"lastTimestamp\": \"2020-01-15T16:12:31Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-m4h9z.15ea1b67b93dfcab\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-m4h9z.15ea1b67b93dfcab\",\n                \"uid\": \"91ce8881-5292-4d5f-af03-430f4595c23b\",\n                \"resourceVersion\": \"416\",\n                \"creationTimestamp\": \"2020-01-15T16:12:31Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n            
    \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-m4h9z\",\n                \"uid\": \"3850799a-5498-41e5-9094-e64940929c0b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"527\",\n                \"fieldPath\": \"spec.containers{prometheus-to-sd-exporter}\"\n            },\n            \"reason\": \"Killing\",\n            \"message\": \"Stopping container prometheus-to-sd-exporter\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-master\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:12:31Z\",\n            \"lastTimestamp\": \"2020-01-15T16:12:31Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-mw4rn.15ea1b59befa82c3\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-mw4rn.15ea1b59befa82c3\",\n                \"uid\": \"85f32fb0-3881-44bf-a7f0-10f93e4d72c5\",\n                \"resourceVersion\": \"161\",\n                \"creationTimestamp\": \"2020-01-15T16:11:32Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-mw4rn\",\n                \"uid\": \"b2f5ed91-bade-4c46-8a55-3a8b98ac5cc4\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"620\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/fluentd-gcp-v3.2.0-mw4rn to bootstrap-e2e-minion-group-vrtv\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n       
     \"firstTimestamp\": \"2020-01-15T16:11:31Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:31Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-mw4rn.15ea1b59e82dae3a\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-mw4rn.15ea1b59e82dae3a\",\n                \"uid\": \"dbbd15c2-c87e-4854-a906-3f2ede8681fe\",\n                \"resourceVersion\": \"203\",\n                \"creationTimestamp\": \"2020-01-15T16:11:33Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-mw4rn\",\n                \"uid\": \"b2f5ed91-bade-4c46-8a55-3a8b98ac5cc4\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"637\",\n                \"fieldPath\": \"spec.containers{fluentd-gcp}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-vrtv\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:32Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:32Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-mw4rn.15ea1b5c37a3f118\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": 
\"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-mw4rn.15ea1b5c37a3f118\",\n                \"uid\": \"24ec3082-2b6e-4402-ae06-aabf919cecb0\",\n                \"resourceVersion\": \"279\",\n                \"creationTimestamp\": \"2020-01-15T16:11:42Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-mw4rn\",\n                \"uid\": \"b2f5ed91-bade-4c46-8a55-3a8b98ac5cc4\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"637\",\n                \"fieldPath\": \"spec.containers{fluentd-gcp}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-vrtv\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:42Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:42Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-mw4rn.15ea1b5c3ed50622\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-mw4rn.15ea1b5c3ed50622\",\n                \"uid\": \"e1991df4-1c46-46bf-8ef6-120fa2de9a6a\",\n                \"resourceVersion\": \"280\",\n                \"creationTimestamp\": \"2020-01-15T16:11:42Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-mw4rn\",\n                \"uid\": 
\"b2f5ed91-bade-4c46-8a55-3a8b98ac5cc4\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"637\",\n                \"fieldPath\": \"spec.containers{fluentd-gcp}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container fluentd-gcp\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-vrtv\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:42Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:42Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-mw4rn.15ea1b5c4b134bc4\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-mw4rn.15ea1b5c4b134bc4\",\n                \"uid\": \"e821d9d9-246d-4e2f-a003-8927ffd590f1\",\n                \"resourceVersion\": \"281\",\n                \"creationTimestamp\": \"2020-01-15T16:11:42Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-mw4rn\",\n                \"uid\": \"b2f5ed91-bade-4c46-8a55-3a8b98ac5cc4\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"637\",\n                \"fieldPath\": \"spec.containers{fluentd-gcp}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container fluentd-gcp\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-vrtv\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:42Z\",\n            \"lastTimestamp\": 
\"2020-01-15T16:11:42Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-mw4rn.15ea1b5c4b31831f\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-mw4rn.15ea1b5c4b31831f\",\n                \"uid\": \"e9c02c46-c12d-4db1-b901-09a62c430079\",\n                \"resourceVersion\": \"282\",\n                \"creationTimestamp\": \"2020-01-15T16:11:42Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-mw4rn\",\n                \"uid\": \"b2f5ed91-bade-4c46-8a55-3a8b98ac5cc4\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"637\",\n                \"fieldPath\": \"spec.containers{prometheus-to-sd-exporter}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/prometheus-to-sd:v0.5.0\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-vrtv\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:42Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:42Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-mw4rn.15ea1b5c4e90cbdc\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": 
\"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-mw4rn.15ea1b5c4e90cbdc\",\n                \"uid\": \"525ceb7f-7488-4bd3-a1b6-8968fbc071d6\",\n                \"resourceVersion\": \"283\",\n                \"creationTimestamp\": \"2020-01-15T16:11:42Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-mw4rn\",\n                \"uid\": \"b2f5ed91-bade-4c46-8a55-3a8b98ac5cc4\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"637\",\n                \"fieldPath\": \"spec.containers{prometheus-to-sd-exporter}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container prometheus-to-sd-exporter\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-vrtv\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:42Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:42Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-mw4rn.15ea1b5c5c875296\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-mw4rn.15ea1b5c5c875296\",\n                \"uid\": \"3047e314-fcfa-4746-b921-b8b41dffdc92\",\n                \"resourceVersion\": \"285\",\n                \"creationTimestamp\": \"2020-01-15T16:11:42Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-mw4rn\",\n                \"uid\": 
\"b2f5ed91-bade-4c46-8a55-3a8b98ac5cc4\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"637\",\n                \"fieldPath\": \"spec.containers{prometheus-to-sd-exporter}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container prometheus-to-sd-exporter\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-vrtv\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:42Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:42Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-mw4rn.15ea1b622a588d53\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-mw4rn.15ea1b622a588d53\",\n                \"uid\": \"33fed705-71ae-4f95-add5-2286f695b98c\",\n                \"resourceVersion\": \"396\",\n                \"creationTimestamp\": \"2020-01-15T16:12:07Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-mw4rn\",\n                \"uid\": \"b2f5ed91-bade-4c46-8a55-3a8b98ac5cc4\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"637\",\n                \"fieldPath\": \"spec.containers{fluentd-gcp}\"\n            },\n            \"reason\": \"Killing\",\n            \"message\": \"Stopping container fluentd-gcp\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-vrtv\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:12:07Z\",\n  
          \"lastTimestamp\": \"2020-01-15T16:12:07Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-mw4rn.15ea1b622a5d1f48\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-mw4rn.15ea1b622a5d1f48\",\n                \"uid\": \"402d849d-5383-4c27-8014-eede6b4e1d1a\",\n                \"resourceVersion\": \"394\",\n                \"creationTimestamp\": \"2020-01-15T16:12:07Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-mw4rn\",\n                \"uid\": \"b2f5ed91-bade-4c46-8a55-3a8b98ac5cc4\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"637\",\n                \"fieldPath\": \"spec.containers{prometheus-to-sd-exporter}\"\n            },\n            \"reason\": \"Killing\",\n            \"message\": \"Stopping container prometheus-to-sd-exporter\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-vrtv\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:12:07Z\",\n            \"lastTimestamp\": \"2020-01-15T16:12:07Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-mxnmk.15ea1b64ff5c18ff\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": 
\"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-mxnmk.15ea1b64ff5c18ff\",\n                \"uid\": \"920dbf3d-902f-4dc0-a477-d99718fbc3c5\",\n                \"resourceVersion\": \"398\",\n                \"creationTimestamp\": \"2020-01-15T16:12:19Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-mxnmk\",\n                \"uid\": \"ad5ca439-034f-418f-b015-1ea41c6b569b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"955\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/fluentd-gcp-v3.2.0-mxnmk to bootstrap-e2e-minion-group-vrtv\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:12:19Z\",\n            \"lastTimestamp\": \"2020-01-15T16:12:19Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-mxnmk.15ea1b65262e0bb2\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-mxnmk.15ea1b65262e0bb2\",\n                \"uid\": \"5c4953be-7eb3-426f-b027-4568ece792ee\",\n                \"resourceVersion\": \"399\",\n                \"creationTimestamp\": \"2020-01-15T16:12:20Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-mxnmk\",\n                \"uid\": \"ad5ca439-034f-418f-b015-1ea41c6b569b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": 
\"957\",\n                \"fieldPath\": \"spec.containers{fluentd-gcp}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-vrtv\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:12:20Z\",\n            \"lastTimestamp\": \"2020-01-15T16:12:20Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-mxnmk.15ea1b65294eb681\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-mxnmk.15ea1b65294eb681\",\n                \"uid\": \"93b72ac4-7a76-44d8-b532-4377bcb511ff\",\n                \"resourceVersion\": \"400\",\n                \"creationTimestamp\": \"2020-01-15T16:12:20Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-mxnmk\",\n                \"uid\": \"ad5ca439-034f-418f-b015-1ea41c6b569b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"957\",\n                \"fieldPath\": \"spec.containers{fluentd-gcp}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container fluentd-gcp\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-vrtv\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:12:20Z\",\n            \"lastTimestamp\": \"2020-01-15T16:12:20Z\",\n            
\"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-mxnmk.15ea1b6536c75726\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-mxnmk.15ea1b6536c75726\",\n                \"uid\": \"090aec3d-29b5-48d3-a11a-ad5114514d0a\",\n                \"resourceVersion\": \"401\",\n                \"creationTimestamp\": \"2020-01-15T16:12:20Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-mxnmk\",\n                \"uid\": \"ad5ca439-034f-418f-b015-1ea41c6b569b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"957\",\n                \"fieldPath\": \"spec.containers{fluentd-gcp}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container fluentd-gcp\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-vrtv\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:12:20Z\",\n            \"lastTimestamp\": \"2020-01-15T16:12:20Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-mxnmk.15ea1b65372aa54a\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-mxnmk.15ea1b65372aa54a\",\n                \"uid\": \"344bff34-d0fa-40db-aee0-6a57c80ab4f2\",\n                
\"resourceVersion\": \"402\",\n                \"creationTimestamp\": \"2020-01-15T16:12:20Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-mxnmk\",\n                \"uid\": \"ad5ca439-034f-418f-b015-1ea41c6b569b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"957\",\n                \"fieldPath\": \"spec.containers{prometheus-to-sd-exporter}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/prometheus-to-sd:v0.5.0\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-vrtv\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:12:20Z\",\n            \"lastTimestamp\": \"2020-01-15T16:12:20Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-mxnmk.15ea1b653b6d9200\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-mxnmk.15ea1b653b6d9200\",\n                \"uid\": \"103e3a13-20a0-4218-9934-30d1d231351c\",\n                \"resourceVersion\": \"403\",\n                \"creationTimestamp\": \"2020-01-15T16:12:20Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-mxnmk\",\n                \"uid\": \"ad5ca439-034f-418f-b015-1ea41c6b569b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"957\",\n                \"fieldPath\": 
\"spec.containers{prometheus-to-sd-exporter}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container prometheus-to-sd-exporter\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-vrtv\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:12:20Z\",\n            \"lastTimestamp\": \"2020-01-15T16:12:20Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-mxnmk.15ea1b654a718227\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-mxnmk.15ea1b654a718227\",\n                \"uid\": \"16c34d48-8bc9-4c92-91fb-eb05328ec9c6\",\n                \"resourceVersion\": \"404\",\n                \"creationTimestamp\": \"2020-01-15T16:12:21Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-mxnmk\",\n                \"uid\": \"ad5ca439-034f-418f-b015-1ea41c6b569b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"957\",\n                \"fieldPath\": \"spec.containers{prometheus-to-sd-exporter}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container prometheus-to-sd-exporter\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-vrtv\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:12:21Z\",\n            \"lastTimestamp\": \"2020-01-15T16:12:21Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            
\"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-pfpj2.15ea1b5962ebf357\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-pfpj2.15ea1b5962ebf357\",\n                \"uid\": \"aa64f957-2d28-45f8-be42-9cf6652ba563\",\n                \"resourceVersion\": \"114\",\n                \"creationTimestamp\": \"2020-01-15T16:11:30Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-pfpj2\",\n                \"uid\": \"870dd56a-c56f-41de-8af0-171de1dc699d\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"564\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/fluentd-gcp-v3.2.0-pfpj2 to bootstrap-e2e-minion-group-qn53\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:30Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:30Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-pfpj2.15ea1b59aace2f8c\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-pfpj2.15ea1b59aace2f8c\",\n                \"uid\": \"68a5e524-2cb3-4827-996e-e2268d13bdf3\",\n                \"resourceVersion\": \"168\",\n                \"creationTimestamp\": \"2020-01-15T16:11:32Z\"\n            },\n            
\"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-pfpj2\",\n                \"uid\": \"870dd56a-c56f-41de-8af0-171de1dc699d\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"588\"\n            },\n            \"reason\": \"FailedMount\",\n            \"message\": \"MountVolume.SetUp failed for volume \\\"fluentd-gcp-token-5vkfw\\\" : failed to sync secret cache: timed out waiting for the condition\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-qn53\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:31Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:31Z\",\n            \"count\": 1,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-pfpj2.15ea1b59aad04cce\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-pfpj2.15ea1b59aad04cce\",\n                \"uid\": \"91a9186c-c685-4805-a9d4-0b90b91e23e2\",\n                \"resourceVersion\": \"162\",\n                \"creationTimestamp\": \"2020-01-15T16:11:32Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-pfpj2\",\n                \"uid\": \"870dd56a-c56f-41de-8af0-171de1dc699d\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"588\"\n            },\n            \"reason\": \"FailedMount\",\n            \"message\": \"MountVolume.SetUp failed for volume \\\"config-volume\\\" : failed to sync configmap cache: timed out 
waiting for the condition\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-qn53\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:31Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:31Z\",\n            \"count\": 1,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-pfpj2.15ea1b59edce27a4\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-pfpj2.15ea1b59edce27a4\",\n                \"uid\": \"31953552-ce72-40dc-8ac9-fd0919175148\",\n                \"resourceVersion\": \"178\",\n                \"creationTimestamp\": \"2020-01-15T16:11:32Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-pfpj2\",\n                \"uid\": \"870dd56a-c56f-41de-8af0-171de1dc699d\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"588\",\n                \"fieldPath\": \"spec.containers{fluentd-gcp}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-qn53\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:32Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:32Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        
{\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-pfpj2.15ea1b5c1e32341c\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-pfpj2.15ea1b5c1e32341c\",\n                \"uid\": \"e92ed6a4-8630-4811-802d-72f339ac3ba5\",\n                \"resourceVersion\": \"276\",\n                \"creationTimestamp\": \"2020-01-15T16:11:41Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-pfpj2\",\n                \"uid\": \"870dd56a-c56f-41de-8af0-171de1dc699d\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"588\",\n                \"fieldPath\": \"spec.containers{fluentd-gcp}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-qn53\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:41Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:41Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-pfpj2.15ea1b5c90c91b8b\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-pfpj2.15ea1b5c90c91b8b\",\n                \"uid\": \"4d85812d-9a87-4d5c-813e-318a1f0a7072\",\n                \"resourceVersion\": \"291\",\n                \"creationTimestamp\": \"2020-01-15T16:11:43Z\"\n            },\n            
\"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-pfpj2\",\n                \"uid\": \"870dd56a-c56f-41de-8af0-171de1dc699d\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"588\",\n                \"fieldPath\": \"spec.containers{fluentd-gcp}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container fluentd-gcp\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-qn53\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:43Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:43Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-pfpj2.15ea1b5c9c7a33d5\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-pfpj2.15ea1b5c9c7a33d5\",\n                \"uid\": \"95a3ec9f-a010-44f1-8ff6-0c087020f3df\",\n                \"resourceVersion\": \"294\",\n                \"creationTimestamp\": \"2020-01-15T16:11:43Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-pfpj2\",\n                \"uid\": \"870dd56a-c56f-41de-8af0-171de1dc699d\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"588\",\n                \"fieldPath\": \"spec.containers{fluentd-gcp}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container fluentd-gcp\",\n            \"source\": {\n                
\"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-qn53\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:43Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:43Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-pfpj2.15ea1b5c9cbde38a\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-pfpj2.15ea1b5c9cbde38a\",\n                \"uid\": \"866309c4-6f3f-4c40-9475-4b391037e273\",\n                \"resourceVersion\": \"296\",\n                \"creationTimestamp\": \"2020-01-15T16:11:43Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-pfpj2\",\n                \"uid\": \"870dd56a-c56f-41de-8af0-171de1dc699d\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"588\",\n                \"fieldPath\": \"spec.containers{prometheus-to-sd-exporter}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/prometheus-to-sd:v0.5.0\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-qn53\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:43Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:43Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": 
\"fluentd-gcp-v3.2.0-pfpj2.15ea1b5ca27be175\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-pfpj2.15ea1b5ca27be175\",\n                \"uid\": \"9d8e148d-6514-461a-8e10-79db1491036c\",\n                \"resourceVersion\": \"298\",\n                \"creationTimestamp\": \"2020-01-15T16:11:43Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-pfpj2\",\n                \"uid\": \"870dd56a-c56f-41de-8af0-171de1dc699d\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"588\",\n                \"fieldPath\": \"spec.containers{prometheus-to-sd-exporter}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container prometheus-to-sd-exporter\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-qn53\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:44Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:44Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-pfpj2.15ea1b5cb0117dea\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-pfpj2.15ea1b5cb0117dea\",\n                \"uid\": \"d0142984-acb1-4796-b69d-98854002e5f8\",\n                \"resourceVersion\": \"301\",\n                \"creationTimestamp\": \"2020-01-15T16:11:44Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": 
\"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-pfpj2\",\n                \"uid\": \"870dd56a-c56f-41de-8af0-171de1dc699d\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"588\",\n                \"fieldPath\": \"spec.containers{prometheus-to-sd-exporter}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container prometheus-to-sd-exporter\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-qn53\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:44Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:44Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-pfpj2.15ea1b6ace0a2d0a\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-pfpj2.15ea1b6ace0a2d0a\",\n                \"uid\": \"c21e6dc0-92d2-4c33-a079-e2bd84cbd8a1\",\n                \"resourceVersion\": \"429\",\n                \"creationTimestamp\": \"2020-01-15T16:12:44Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-pfpj2\",\n                \"uid\": \"870dd56a-c56f-41de-8af0-171de1dc699d\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"588\",\n                \"fieldPath\": \"spec.containers{fluentd-gcp}\"\n            },\n            \"reason\": \"Killing\",\n            \"message\": \"Stopping container fluentd-gcp\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": 
\"bootstrap-e2e-minion-group-qn53\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:12:44Z\",\n            \"lastTimestamp\": \"2020-01-15T16:12:44Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-pfpj2.15ea1b6ace0c8515\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-pfpj2.15ea1b6ace0c8515\",\n                \"uid\": \"55934df7-aaf6-4c55-ad75-9782a41e55f4\",\n                \"resourceVersion\": \"427\",\n                \"creationTimestamp\": \"2020-01-15T16:12:44Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-pfpj2\",\n                \"uid\": \"870dd56a-c56f-41de-8af0-171de1dc699d\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"588\",\n                \"fieldPath\": \"spec.containers{prometheus-to-sd-exporter}\"\n            },\n            \"reason\": \"Killing\",\n            \"message\": \"Stopping container prometheus-to-sd-exporter\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-qn53\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:12:44Z\",\n            \"lastTimestamp\": \"2020-01-15T16:12:44Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-t6mk4.15ea1b6e1b3d2bf2\",\n                \"namespace\": 
\"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-t6mk4.15ea1b6e1b3d2bf2\",\n                \"uid\": \"84133720-449f-4977-8c96-da0a6a7aa22a\",\n                \"resourceVersion\": \"431\",\n                \"creationTimestamp\": \"2020-01-15T16:12:59Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-t6mk4\",\n                \"uid\": \"c8e9c144-1258-4ecc-90c4-2ca8bd265b10\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"1115\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/fluentd-gcp-v3.2.0-t6mk4 to bootstrap-e2e-minion-group-qn53\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:12:59Z\",\n            \"lastTimestamp\": \"2020-01-15T16:12:59Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-t6mk4.15ea1b6e40cde50f\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-t6mk4.15ea1b6e40cde50f\",\n                \"uid\": \"1bcd362e-a9af-4155-9d8e-8284fca20925\",\n                \"resourceVersion\": \"432\",\n                \"creationTimestamp\": \"2020-01-15T16:12:59Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-t6mk4\",\n                \"uid\": \"c8e9c144-1258-4ecc-90c4-2ca8bd265b10\",\n                
\"apiVersion\": \"v1\",\n                \"resourceVersion\": \"1117\",\n                \"fieldPath\": \"spec.containers{fluentd-gcp}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-qn53\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:12:59Z\",\n            \"lastTimestamp\": \"2020-01-15T16:12:59Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-t6mk4.15ea1b6e447781b8\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-t6mk4.15ea1b6e447781b8\",\n                \"uid\": \"3a75e12d-5964-488f-a97f-065aecc0314c\",\n                \"resourceVersion\": \"433\",\n                \"creationTimestamp\": \"2020-01-15T16:12:59Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-t6mk4\",\n                \"uid\": \"c8e9c144-1258-4ecc-90c4-2ca8bd265b10\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"1117\",\n                \"fieldPath\": \"spec.containers{fluentd-gcp}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container fluentd-gcp\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-qn53\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:12:59Z\",\n         
   \"lastTimestamp\": \"2020-01-15T16:12:59Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-t6mk4.15ea1b6e51eaca21\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-t6mk4.15ea1b6e51eaca21\",\n                \"uid\": \"8e15f6ed-6f30-47aa-9b0e-5c490efee080\",\n                \"resourceVersion\": \"434\",\n                \"creationTimestamp\": \"2020-01-15T16:12:59Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-t6mk4\",\n                \"uid\": \"c8e9c144-1258-4ecc-90c4-2ca8bd265b10\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"1117\",\n                \"fieldPath\": \"spec.containers{fluentd-gcp}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container fluentd-gcp\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-qn53\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:12:59Z\",\n            \"lastTimestamp\": \"2020-01-15T16:12:59Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-t6mk4.15ea1b6e526acbb5\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-t6mk4.15ea1b6e526acbb5\",\n                \"uid\": 
\"3e6b014c-78dd-4914-97e9-a01bfa52feb2\",\n                \"resourceVersion\": \"435\",\n                \"creationTimestamp\": \"2020-01-15T16:12:59Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-t6mk4\",\n                \"uid\": \"c8e9c144-1258-4ecc-90c4-2ca8bd265b10\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"1117\",\n                \"fieldPath\": \"spec.containers{prometheus-to-sd-exporter}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/prometheus-to-sd:v0.5.0\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-qn53\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:12:59Z\",\n            \"lastTimestamp\": \"2020-01-15T16:12:59Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-t6mk4.15ea1b6e57b77174\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-t6mk4.15ea1b6e57b77174\",\n                \"uid\": \"e38221e9-b8d2-430f-956c-7446fb448160\",\n                \"resourceVersion\": \"436\",\n                \"creationTimestamp\": \"2020-01-15T16:13:00Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-t6mk4\",\n                \"uid\": \"c8e9c144-1258-4ecc-90c4-2ca8bd265b10\",\n                \"apiVersion\": \"v1\",\n                
\"resourceVersion\": \"1117\",\n                \"fieldPath\": \"spec.containers{prometheus-to-sd-exporter}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container prometheus-to-sd-exporter\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-qn53\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:13:00Z\",\n            \"lastTimestamp\": \"2020-01-15T16:13:00Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-t6mk4.15ea1b6ebd62021d\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-t6mk4.15ea1b6ebd62021d\",\n                \"uid\": \"047814ee-b674-4c68-b851-3a32ac67bf29\",\n                \"resourceVersion\": \"437\",\n                \"creationTimestamp\": \"2020-01-15T16:13:01Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-t6mk4\",\n                \"uid\": \"c8e9c144-1258-4ecc-90c4-2ca8bd265b10\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"1117\",\n                \"fieldPath\": \"spec.containers{prometheus-to-sd-exporter}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container prometheus-to-sd-exporter\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-qn53\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:13:01Z\",\n            \"lastTimestamp\": \"2020-01-15T16:13:01Z\",\n            
\"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-vqmcb.15ea1b61a118ddef\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-vqmcb.15ea1b61a118ddef\",\n                \"uid\": \"79e701ea-f139-4b05-9306-d4a4551767be\",\n                \"resourceVersion\": \"381\",\n                \"creationTimestamp\": \"2020-01-15T16:12:05Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-vqmcb\",\n                \"uid\": \"a0afe40c-fc98-4a2a-90aa-17e98f6fdf24\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"873\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/fluentd-gcp-v3.2.0-vqmcb to bootstrap-e2e-minion-group-q10p\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:12:05Z\",\n            \"lastTimestamp\": \"2020-01-15T16:12:05Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-vqmcb.15ea1b61cb7f62ea\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-vqmcb.15ea1b61cb7f62ea\",\n                \"uid\": \"e544a0d7-c2bb-4710-acf3-5a54fc34f859\",\n                \"resourceVersion\": \"384\",\n                
\"creationTimestamp\": \"2020-01-15T16:12:06Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-vqmcb\",\n                \"uid\": \"a0afe40c-fc98-4a2a-90aa-17e98f6fdf24\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"880\",\n                \"fieldPath\": \"spec.containers{fluentd-gcp}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-q10p\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:12:06Z\",\n            \"lastTimestamp\": \"2020-01-15T16:12:06Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-vqmcb.15ea1b61d197a8f2\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-vqmcb.15ea1b61d197a8f2\",\n                \"uid\": \"ddbdbcb5-119e-4621-aa56-4890694abd9d\",\n                \"resourceVersion\": \"385\",\n                \"creationTimestamp\": \"2020-01-15T16:12:06Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-vqmcb\",\n                \"uid\": \"a0afe40c-fc98-4a2a-90aa-17e98f6fdf24\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"880\",\n                \"fieldPath\": \"spec.containers{fluentd-gcp}\"\n          
  },\n            \"reason\": \"Created\",\n            \"message\": \"Created container fluentd-gcp\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-q10p\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:12:06Z\",\n            \"lastTimestamp\": \"2020-01-15T16:12:06Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-vqmcb.15ea1b61e88c6c2f\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-vqmcb.15ea1b61e88c6c2f\",\n                \"uid\": \"97f2474a-67bf-41b9-8b8e-ffed37e40974\",\n                \"resourceVersion\": \"390\",\n                \"creationTimestamp\": \"2020-01-15T16:12:06Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-vqmcb\",\n                \"uid\": \"a0afe40c-fc98-4a2a-90aa-17e98f6fdf24\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"880\",\n                \"fieldPath\": \"spec.containers{fluentd-gcp}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container fluentd-gcp\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-q10p\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:12:06Z\",\n            \"lastTimestamp\": \"2020-01-15T16:12:06Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": 
\"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-vqmcb.15ea1b61e9164eb8\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-vqmcb.15ea1b61e9164eb8\",\n                \"uid\": \"3d67aa82-ec82-4b64-afe8-960819ca8c39\",\n                \"resourceVersion\": \"391\",\n                \"creationTimestamp\": \"2020-01-15T16:12:06Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-vqmcb\",\n                \"uid\": \"a0afe40c-fc98-4a2a-90aa-17e98f6fdf24\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"880\",\n                \"fieldPath\": \"spec.containers{prometheus-to-sd-exporter}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/prometheus-to-sd:v0.5.0\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-q10p\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:12:06Z\",\n            \"lastTimestamp\": \"2020-01-15T16:12:06Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-vqmcb.15ea1b61f058068d\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-vqmcb.15ea1b61f058068d\",\n                \"uid\": \"9a9655c9-2124-41b1-a7c6-fa0f992deaa9\",\n                \"resourceVersion\": \"392\",\n                \"creationTimestamp\": \"2020-01-15T16:12:06Z\"\n      
      },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-vqmcb\",\n                \"uid\": \"a0afe40c-fc98-4a2a-90aa-17e98f6fdf24\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"880\",\n                \"fieldPath\": \"spec.containers{prometheus-to-sd-exporter}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container prometheus-to-sd-exporter\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-q10p\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:12:06Z\",\n            \"lastTimestamp\": \"2020-01-15T16:12:06Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-vqmcb.15ea1b6200614c2e\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-vqmcb.15ea1b6200614c2e\",\n                \"uid\": \"93ac3e7d-58e4-4567-8056-b295f11bddfa\",\n                \"resourceVersion\": \"393\",\n                \"creationTimestamp\": \"2020-01-15T16:12:07Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-vqmcb\",\n                \"uid\": \"a0afe40c-fc98-4a2a-90aa-17e98f6fdf24\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"880\",\n                \"fieldPath\": \"spec.containers{prometheus-to-sd-exporter}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container 
prometheus-to-sd-exporter\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-q10p\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:12:07Z\",\n            \"lastTimestamp\": \"2020-01-15T16:12:07Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-zcg6h.15ea1b675603e24b\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-zcg6h.15ea1b675603e24b\",\n                \"uid\": \"c36b4f0d-4dcd-4c49-894b-edae32558e11\",\n                \"resourceVersion\": \"409\",\n                \"creationTimestamp\": \"2020-01-15T16:12:29Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-zcg6h\",\n                \"uid\": \"090b8730-0653-4ec1-84e3-88411a3b1fb1\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"1001\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/fluentd-gcp-v3.2.0-zcg6h to bootstrap-e2e-minion-group-qkcq\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:12:29Z\",\n            \"lastTimestamp\": \"2020-01-15T16:12:29Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-zcg6h.15ea1b677d679660\",\n 
               \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-zcg6h.15ea1b677d679660\",\n                \"uid\": \"4d416ea0-b8bb-4f92-bf0e-cb3d2cb124fd\",\n                \"resourceVersion\": \"410\",\n                \"creationTimestamp\": \"2020-01-15T16:12:30Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-zcg6h\",\n                \"uid\": \"090b8730-0653-4ec1-84e3-88411a3b1fb1\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"1003\",\n                \"fieldPath\": \"spec.containers{fluentd-gcp}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-qkcq\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:12:30Z\",\n            \"lastTimestamp\": \"2020-01-15T16:12:30Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-zcg6h.15ea1b67808e906b\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-zcg6h.15ea1b67808e906b\",\n                \"uid\": \"b4309bda-c48e-4262-8872-84ef3d12ff64\",\n                \"resourceVersion\": \"411\",\n                \"creationTimestamp\": \"2020-01-15T16:12:30Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": 
\"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-zcg6h\",\n                \"uid\": \"090b8730-0653-4ec1-84e3-88411a3b1fb1\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"1003\",\n                \"fieldPath\": \"spec.containers{fluentd-gcp}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container fluentd-gcp\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-qkcq\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:12:30Z\",\n            \"lastTimestamp\": \"2020-01-15T16:12:30Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-zcg6h.15ea1b678bd0dc03\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-zcg6h.15ea1b678bd0dc03\",\n                \"uid\": \"3a1b2cbe-982d-4250-8978-6c92cf3e425c\",\n                \"resourceVersion\": \"412\",\n                \"creationTimestamp\": \"2020-01-15T16:12:30Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-zcg6h\",\n                \"uid\": \"090b8730-0653-4ec1-84e3-88411a3b1fb1\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"1003\",\n                \"fieldPath\": \"spec.containers{fluentd-gcp}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container fluentd-gcp\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-qkcq\"\n    
        },\n            \"firstTimestamp\": \"2020-01-15T16:12:30Z\",\n            \"lastTimestamp\": \"2020-01-15T16:12:30Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-zcg6h.15ea1b678c8f8fe7\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-zcg6h.15ea1b678c8f8fe7\",\n                \"uid\": \"944f04e6-b8d0-4800-99e5-0da6ca07bf46\",\n                \"resourceVersion\": \"413\",\n                \"creationTimestamp\": \"2020-01-15T16:12:30Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-zcg6h\",\n                \"uid\": \"090b8730-0653-4ec1-84e3-88411a3b1fb1\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"1003\",\n                \"fieldPath\": \"spec.containers{prometheus-to-sd-exporter}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/prometheus-to-sd:v0.5.0\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-qkcq\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:12:30Z\",\n            \"lastTimestamp\": \"2020-01-15T16:12:30Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-zcg6h.15ea1b67917e0ee6\",\n                \"namespace\": 
\"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-zcg6h.15ea1b67917e0ee6\",\n                \"uid\": \"28e52a8b-a21e-468c-b44b-d337f38b37b4\",\n                \"resourceVersion\": \"414\",\n                \"creationTimestamp\": \"2020-01-15T16:12:30Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-zcg6h\",\n                \"uid\": \"090b8730-0653-4ec1-84e3-88411a3b1fb1\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"1003\",\n                \"fieldPath\": \"spec.containers{prometheus-to-sd-exporter}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container prometheus-to-sd-exporter\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-qkcq\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:12:30Z\",\n            \"lastTimestamp\": \"2020-01-15T16:12:30Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0-zcg6h.15ea1b67a20c1fd5\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0-zcg6h.15ea1b67a20c1fd5\",\n                \"uid\": \"7e48780e-5577-4f35-83f3-effe78868cca\",\n                \"resourceVersion\": \"415\",\n                \"creationTimestamp\": \"2020-01-15T16:12:31Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0-zcg6h\",\n                
\"uid\": \"090b8730-0653-4ec1-84e3-88411a3b1fb1\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"1003\",\n                \"fieldPath\": \"spec.containers{prometheus-to-sd-exporter}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container prometheus-to-sd-exporter\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-qkcq\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:12:31Z\",\n            \"lastTimestamp\": \"2020-01-15T16:12:31Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0.15ea1b586d50959c\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0.15ea1b586d50959c\",\n                \"uid\": \"fcf06ef6-f728-4bd8-b507-b7a7df358c12\",\n                \"resourceVersion\": \"77\",\n                \"creationTimestamp\": \"2020-01-15T16:11:26Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0\",\n                \"uid\": \"e6adcaaf-f903-4739-a670-e2bb779dd406\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"343\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: fluentd-gcp-v3.2.0-m4h9z\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:25Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:25Z\",\n            \"count\": 1,\n            
\"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0.15ea1b594491e02e\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0.15ea1b594491e02e\",\n                \"uid\": \"d996e637-34ae-4f1c-b1ab-a95522a4f693\",\n                \"resourceVersion\": \"106\",\n                \"creationTimestamp\": \"2020-01-15T16:11:29Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0\",\n                \"uid\": \"e6adcaaf-f903-4739-a670-e2bb779dd406\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"522\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: fluentd-gcp-v3.2.0-pfpj2\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:29Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:29Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0.15ea1b5958c0d37e\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0.15ea1b5958c0d37e\",\n                \"uid\": \"938c6a6a-24b8-4c2d-ba38-0ddd48426a4b\",\n                \"resourceVersion\": \"115\",\n                \"creationTimestamp\": \"2020-01-15T16:11:30Z\"\n            },\n            \"involvedObject\": {\n        
        \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0\",\n                \"uid\": \"e6adcaaf-f903-4739-a670-e2bb779dd406\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"522\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: fluentd-gcp-v3.2.0-hvwts\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:29Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:29Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0.15ea1b59826a3a84\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0.15ea1b59826a3a84\",\n                \"uid\": \"9259ee00-dbb3-49a2-9076-1aadfccf26e2\",\n                \"resourceVersion\": \"122\",\n                \"creationTimestamp\": \"2020-01-15T16:11:30Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0\",\n                \"uid\": \"e6adcaaf-f903-4739-a670-e2bb779dd406\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"576\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: fluentd-gcp-v3.2.0-g8wd5\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:30Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:30Z\",\n            
\"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0.15ea1b599b9c8e3c\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0.15ea1b599b9c8e3c\",\n                \"uid\": \"676f400b-9d8c-426c-a0ae-d4bb6484e587\",\n                \"resourceVersion\": \"133\",\n                \"creationTimestamp\": \"2020-01-15T16:11:31Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0\",\n                \"uid\": \"e6adcaaf-f903-4739-a670-e2bb779dd406\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"576\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: fluentd-gcp-v3.2.0-mw4rn\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:31Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:31Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0.15ea1b5f56146950\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0.15ea1b5f56146950\",\n                \"uid\": \"7beb89a9-8841-491a-9d6a-76a44fc2d9f9\",\n                \"resourceVersion\": \"339\",\n                \"creationTimestamp\": \"2020-01-15T16:11:55Z\"\n            },\n            
\"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0\",\n                \"uid\": \"e6adcaaf-f903-4739-a670-e2bb779dd406\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"801\"\n            },\n            \"reason\": \"SuccessfulDelete\",\n            \"message\": \"Deleted pod: fluentd-gcp-v3.2.0-hvwts\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:55Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:55Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0.15ea1b619cea7126\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0.15ea1b619cea7126\",\n                \"uid\": \"2ad2e4da-5b0a-4f2f-be55-3b6edc973989\",\n                \"resourceVersion\": \"379\",\n                \"creationTimestamp\": \"2020-01-15T16:12:05Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0\",\n                \"uid\": \"e6adcaaf-f903-4739-a670-e2bb779dd406\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"866\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: fluentd-gcp-v3.2.0-vqmcb\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:12:05Z\",\n            \"lastTimestamp\": 
\"2020-01-15T16:12:05Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0.15ea1b62283c9553\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0.15ea1b62283c9553\",\n                \"uid\": \"b1711995-24f9-44e1-aed2-f21b0af9fd3f\",\n                \"resourceVersion\": \"395\",\n                \"creationTimestamp\": \"2020-01-15T16:12:07Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0\",\n                \"uid\": \"e6adcaaf-f903-4739-a670-e2bb779dd406\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"876\"\n            },\n            \"reason\": \"SuccessfulDelete\",\n            \"message\": \"Deleted pod: fluentd-gcp-v3.2.0-mw4rn\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:12:07Z\",\n            \"lastTimestamp\": \"2020-01-15T16:12:07Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0.15ea1b64fd072c11\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0.15ea1b64fd072c11\",\n                \"uid\": \"b32bfbda-ea0d-4937-9c46-1184c2f63a4f\",\n                \"resourceVersion\": \"397\",\n                \"creationTimestamp\": 
\"2020-01-15T16:12:19Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0\",\n                \"uid\": \"e6adcaaf-f903-4739-a670-e2bb779dd406\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"922\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: fluentd-gcp-v3.2.0-mxnmk\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:12:19Z\",\n            \"lastTimestamp\": \"2020-01-15T16:12:19Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0.15ea1b65730a201d\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0.15ea1b65730a201d\",\n                \"uid\": \"9ce39457-759e-4f99-9513-556d9627ac22\",\n                \"resourceVersion\": \"405\",\n                \"creationTimestamp\": \"2020-01-15T16:12:21Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0\",\n                \"uid\": \"e6adcaaf-f903-4739-a670-e2bb779dd406\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"956\"\n            },\n            \"reason\": \"SuccessfulDelete\",\n            \"message\": \"Deleted pod: fluentd-gcp-v3.2.0-g8wd5\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": 
\"2020-01-15T16:12:21Z\",\n            \"lastTimestamp\": \"2020-01-15T16:12:21Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0.15ea1b6754aa8385\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0.15ea1b6754aa8385\",\n                \"uid\": \"1208cc3b-8684-4898-8c8d-22bc9f694293\",\n                \"resourceVersion\": \"408\",\n                \"creationTimestamp\": \"2020-01-15T16:12:29Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0\",\n                \"uid\": \"e6adcaaf-f903-4739-a670-e2bb779dd406\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"981\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: fluentd-gcp-v3.2.0-zcg6h\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:12:29Z\",\n            \"lastTimestamp\": \"2020-01-15T16:12:29Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0.15ea1b67b97d4dc2\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0.15ea1b67b97d4dc2\",\n                \"uid\": \"47e32c9b-61e1-4d39-b4b9-1473c6526633\",\n                \"resourceVersion\": 
\"417\",\n                \"creationTimestamp\": \"2020-01-15T16:12:31Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0\",\n                \"uid\": \"e6adcaaf-f903-4739-a670-e2bb779dd406\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"1004\"\n            },\n            \"reason\": \"SuccessfulDelete\",\n            \"message\": \"Deleted pod: fluentd-gcp-v3.2.0-m4h9z\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:12:31Z\",\n            \"lastTimestamp\": \"2020-01-15T16:12:31Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0.15ea1b69354eeeb9\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0.15ea1b69354eeeb9\",\n                \"uid\": \"45fe7621-6b95-437c-adea-ced5e933bb49\",\n                \"resourceVersion\": \"419\",\n                \"creationTimestamp\": \"2020-01-15T16:12:38Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0\",\n                \"uid\": \"e6adcaaf-f903-4739-a670-e2bb779dd406\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"1035\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: fluentd-gcp-v3.2.0-6tspz\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            
},\n            \"firstTimestamp\": \"2020-01-15T16:12:38Z\",\n            \"lastTimestamp\": \"2020-01-15T16:12:38Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0.15ea1b6acbfe1126\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0.15ea1b6acbfe1126\",\n                \"uid\": \"c3667cf3-a234-457d-9569-bec414e5c507\",\n                \"resourceVersion\": \"428\",\n                \"creationTimestamp\": \"2020-01-15T16:12:44Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0\",\n                \"uid\": \"e6adcaaf-f903-4739-a670-e2bb779dd406\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"1043\"\n            },\n            \"reason\": \"SuccessfulDelete\",\n            \"message\": \"Deleted pod: fluentd-gcp-v3.2.0-pfpj2\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:12:44Z\",\n            \"lastTimestamp\": \"2020-01-15T16:12:44Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"fluentd-gcp-v3.2.0.15ea1b6e19aa3d46\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/fluentd-gcp-v3.2.0.15ea1b6e19aa3d46\",\n                \"uid\": \"0ed9f8c1-1415-4462-b98d-ed73fca60b25\",\n          
      \"resourceVersion\": \"430\",\n                \"creationTimestamp\": \"2020-01-15T16:12:59Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"fluentd-gcp-v3.2.0\",\n                \"uid\": \"e6adcaaf-f903-4739-a670-e2bb779dd406\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"1078\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"(combined from similar events): Created pod: fluentd-gcp-v3.2.0-t6mk4\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:12:59Z\",\n            \"lastTimestamp\": \"2020-01-15T16:12:59Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ingress-gce-lock.15ea1b5ae3a3d49d\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/ingress-gce-lock.15ea1b5ae3a3d49d\",\n                \"uid\": \"771d21b7-e4e1-4e81-9a67-c07a19c7c841\",\n                \"resourceVersion\": \"253\",\n                \"creationTimestamp\": \"2020-01-15T16:11:36Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ConfigMap\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"ingress-gce-lock\",\n                \"uid\": \"d7b79b25-f1dc-4fc5-a84c-ce54b110775b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"678\"\n            },\n            \"reason\": \"LeaderElection\",\n            \"message\": \"bootstrap-e2e-master_81ba0 became leader\",\n            \"source\": {\n                
\"component\": \"loadbalancer-controller\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:36Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:36Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager.15ea1b516d3c625e\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-controller-manager.15ea1b516d3c625e\",\n                \"uid\": \"e1fd4c97-bba4-40d0-8d55-a57f8fca4cd3\",\n                \"resourceVersion\": \"2\",\n                \"creationTimestamp\": \"2020-01-15T16:10:55Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Endpoints\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-controller-manager\",\n                \"uid\": \"230cf73b-020c-4776-a0bb-2eacad67fce6\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"110\"\n            },\n            \"reason\": \"LeaderElection\",\n            \"message\": \"bootstrap-e2e-master_197334f0-6e8d-4b10-b666-0e8fc3e0a58b became leader\",\n            \"source\": {\n                \"component\": \"kube-controller-manager\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:10:55Z\",\n            \"lastTimestamp\": \"2020-01-15T16:10:55Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager.15ea1b516d3c8c61\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": 
\"/api/v1/namespaces/kube-system/events/kube-controller-manager.15ea1b516d3c8c61\",\n                \"uid\": \"6103b6da-e87d-4ef4-89e8-2b96743e7fe5\",\n                \"resourceVersion\": \"3\",\n                \"creationTimestamp\": \"2020-01-15T16:10:55Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Lease\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-controller-manager\",\n                \"uid\": \"f224ba99-8767-40cd-8005-7f314367b9ab\",\n                \"apiVersion\": \"coordination.k8s.io/v1\",\n                \"resourceVersion\": \"112\"\n            },\n            \"reason\": \"LeaderElection\",\n            \"message\": \"bootstrap-e2e-master_197334f0-6e8d-4b10-b666-0e8fc3e0a58b became leader\",\n            \"source\": {\n                \"component\": \"kube-controller-manager\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:10:55Z\",\n            \"lastTimestamp\": \"2020-01-15T16:10:55Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-dns-autoscaler-65bc6d4889-c4f5l.15ea1b8fc2af154c\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-dns-autoscaler-65bc6d4889-c4f5l.15ea1b8fc2af154c\",\n                \"uid\": \"a5363822-0240-4499-86e5-8d9e188f3e60\",\n                \"resourceVersion\": \"1089\",\n                \"creationTimestamp\": \"2020-01-15T16:15:23Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-dns-autoscaler-65bc6d4889-c4f5l\",\n                \"uid\": \"43ee7194-5d2d-47ea-9ba6-cd35339197f4\",\n                \"apiVersion\": 
\"v1\",\n                \"resourceVersion\": \"3001\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/kube-dns-autoscaler-65bc6d4889-c4f5l to bootstrap-e2e-minion-group-qkcq\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:15:23Z\",\n            \"lastTimestamp\": \"2020-01-15T16:15:23Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-dns-autoscaler-65bc6d4889-c4f5l.15ea1b900ea50bb4\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-dns-autoscaler-65bc6d4889-c4f5l.15ea1b900ea50bb4\",\n                \"uid\": \"55879d05-b525-499f-b938-6518c713d2c6\",\n                \"resourceVersion\": \"1103\",\n                \"creationTimestamp\": \"2020-01-15T16:15:24Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-dns-autoscaler-65bc6d4889-c4f5l\",\n                \"uid\": \"43ee7194-5d2d-47ea-9ba6-cd35339197f4\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"3005\",\n                \"fieldPath\": \"spec.containers{autoscaler}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.7.1\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-qkcq\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:15:24Z\",\n            \"lastTimestamp\": 
\"2020-01-15T16:15:24Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-dns-autoscaler-65bc6d4889-c4f5l.15ea1b9012fd195a\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-dns-autoscaler-65bc6d4889-c4f5l.15ea1b9012fd195a\",\n                \"uid\": \"57327ea1-2fa7-45bf-b07c-ad8491774254\",\n                \"resourceVersion\": \"1104\",\n                \"creationTimestamp\": \"2020-01-15T16:15:25Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-dns-autoscaler-65bc6d4889-c4f5l\",\n                \"uid\": \"43ee7194-5d2d-47ea-9ba6-cd35339197f4\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"3005\",\n                \"fieldPath\": \"spec.containers{autoscaler}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container autoscaler\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-qkcq\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:15:24Z\",\n            \"lastTimestamp\": \"2020-01-15T16:15:24Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-dns-autoscaler-65bc6d4889-c4f5l.15ea1b9024d83fec\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": 
\"/api/v1/namespaces/kube-system/events/kube-dns-autoscaler-65bc6d4889-c4f5l.15ea1b9024d83fec\",\n                \"uid\": \"03aef8f6-79b7-4639-a119-f4a144859b11\",\n                \"resourceVersion\": \"1105\",\n                \"creationTimestamp\": \"2020-01-15T16:15:25Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-dns-autoscaler-65bc6d4889-c4f5l\",\n                \"uid\": \"43ee7194-5d2d-47ea-9ba6-cd35339197f4\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"3005\",\n                \"fieldPath\": \"spec.containers{autoscaler}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container autoscaler\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-qkcq\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:15:25Z\",\n            \"lastTimestamp\": \"2020-01-15T16:15:25Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-dns-autoscaler-65bc6d4889-sqctq.15ea1b58598ef156\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-dns-autoscaler-65bc6d4889-sqctq.15ea1b58598ef156\",\n                \"uid\": \"5e2e51d1-f144-4564-86d5-8a091212a14d\",\n                \"resourceVersion\": \"74\",\n                \"creationTimestamp\": \"2020-01-15T16:11:25Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-dns-autoscaler-65bc6d4889-sqctq\",\n                \"uid\": 
\"a85afc3b-7d20-4259-a005-239c2e1b197f\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"508\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"no nodes available to schedule pods\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:25Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:25Z\",\n            \"count\": 1,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-dns-autoscaler-65bc6d4889-sqctq.15ea1b58a65e879e\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-dns-autoscaler-65bc6d4889-sqctq.15ea1b58a65e879e\",\n                \"uid\": \"995ba5ea-045b-4939-805c-c75d228a1b73\",\n                \"resourceVersion\": \"86\",\n                \"creationTimestamp\": \"2020-01-15T16:11:26Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-dns-autoscaler-65bc6d4889-sqctq\",\n                \"uid\": \"a85afc3b-7d20-4259-a005-239c2e1b197f\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"511\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/1 nodes are available: 1 node(s) were unschedulable.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:26Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:27Z\",\n            \"count\": 2,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            
\"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-dns-autoscaler-65bc6d4889-sqctq.15ea1b598db30a2a\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-dns-autoscaler-65bc6d4889-sqctq.15ea1b598db30a2a\",\n                \"uid\": \"6b3cf1e3-2306-47d7-8563-9515ce705174\",\n                \"resourceVersion\": \"264\",\n                \"creationTimestamp\": \"2020-01-15T16:11:31Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-dns-autoscaler-65bc6d4889-sqctq\",\n                \"uid\": \"a85afc3b-7d20-4259-a005-239c2e1b197f\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"535\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) had taints that the pod didn't tolerate.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:30Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:39Z\",\n            \"count\": 4,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-dns-autoscaler-65bc6d4889-sqctq.15ea1b5d6133d5b6\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-dns-autoscaler-65bc6d4889-sqctq.15ea1b5d6133d5b6\",\n                \"uid\": \"9b3969ac-4615-4754-aeb4-a8918594be3d\",\n                \"resourceVersion\": \"309\",\n                \"creationTimestamp\": 
\"2020-01-15T16:11:47Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-dns-autoscaler-65bc6d4889-sqctq\",\n                \"uid\": \"a85afc3b-7d20-4259-a005-239c2e1b197f\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"622\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/kube-dns-autoscaler-65bc6d4889-sqctq to bootstrap-e2e-minion-group-qkcq\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:47Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:47Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-dns-autoscaler-65bc6d4889-sqctq.15ea1b5d95e4ebd7\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-dns-autoscaler-65bc6d4889-sqctq.15ea1b5d95e4ebd7\",\n                \"uid\": \"6ea5f99b-00b9-4257-9200-750b9b0d8713\",\n                \"resourceVersion\": \"311\",\n                \"creationTimestamp\": \"2020-01-15T16:11:48Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-dns-autoscaler-65bc6d4889-sqctq\",\n                \"uid\": \"a85afc3b-7d20-4259-a005-239c2e1b197f\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"743\",\n                \"fieldPath\": \"spec.containers{autoscaler}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image 
\\\"k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.7.1\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-qkcq\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:48Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:48Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-dns-autoscaler-65bc6d4889-sqctq.15ea1b5e1c6d19d6\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-dns-autoscaler-65bc6d4889-sqctq.15ea1b5e1c6d19d6\",\n                \"uid\": \"9d5cfbf5-79f3-48bb-863a-f650da582f50\",\n                \"resourceVersion\": \"315\",\n                \"creationTimestamp\": \"2020-01-15T16:11:50Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-dns-autoscaler-65bc6d4889-sqctq\",\n                \"uid\": \"a85afc3b-7d20-4259-a005-239c2e1b197f\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"743\",\n                \"fieldPath\": \"spec.containers{autoscaler}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.7.1\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-qkcq\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:50Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:50Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            
\"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-dns-autoscaler-65bc6d4889-sqctq.15ea1b5e2e922683\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-dns-autoscaler-65bc6d4889-sqctq.15ea1b5e2e922683\",\n                \"uid\": \"ca54775d-72b6-47a4-8299-df9f42426c3b\",\n                \"resourceVersion\": \"316\",\n                \"creationTimestamp\": \"2020-01-15T16:11:50Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-dns-autoscaler-65bc6d4889-sqctq\",\n                \"uid\": \"a85afc3b-7d20-4259-a005-239c2e1b197f\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"743\",\n                \"fieldPath\": \"spec.containers{autoscaler}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container autoscaler\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-qkcq\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:50Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:50Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-dns-autoscaler-65bc6d4889-sqctq.15ea1b5e3f59bf39\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-dns-autoscaler-65bc6d4889-sqctq.15ea1b5e3f59bf39\",\n                \"uid\": \"2af62c08-79af-4482-8395-0052f2c73e6e\",\n                \"resourceVersion\": \"317\",\n                
\"creationTimestamp\": \"2020-01-15T16:11:50Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-dns-autoscaler-65bc6d4889-sqctq\",\n                \"uid\": \"a85afc3b-7d20-4259-a005-239c2e1b197f\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"743\",\n                \"fieldPath\": \"spec.containers{autoscaler}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container autoscaler\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-qkcq\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:50Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:50Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-dns-autoscaler-65bc6d4889-sqctq.15ea1b8fbd0bc20c\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-dns-autoscaler-65bc6d4889-sqctq.15ea1b8fbd0bc20c\",\n                \"uid\": \"6a435d38-4e3a-4667-a53e-1b9db3ab561e\",\n                \"resourceVersion\": \"1084\",\n                \"creationTimestamp\": \"2020-01-15T16:15:23Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-dns-autoscaler-65bc6d4889-sqctq\",\n                \"uid\": \"a85afc3b-7d20-4259-a005-239c2e1b197f\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"743\",\n                \"fieldPath\": \"spec.containers{autoscaler}\"\n            },\n            \"reason\": 
\"Killing\",\n            \"message\": \"Stopping container autoscaler\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-qkcq\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:15:23Z\",\n            \"lastTimestamp\": \"2020-01-15T16:15:23Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-dns-autoscaler-65bc6d4889.15ea1b55be8dbf5e\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-dns-autoscaler-65bc6d4889.15ea1b55be8dbf5e\",\n                \"uid\": \"43a03f33-89cf-4120-87e4-b5cb2473781f\",\n                \"resourceVersion\": \"58\",\n                \"creationTimestamp\": \"2020-01-15T16:11:14Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ReplicaSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-dns-autoscaler-65bc6d4889\",\n                \"uid\": \"f71df3f8-e404-4e81-a096-559cdd2961cb\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"321\"\n            },\n            \"reason\": \"FailedCreate\",\n            \"message\": \"Error creating: pods \\\"kube-dns-autoscaler-65bc6d4889-\\\" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount \\\"kube-dns-autoscaler\\\" not found\",\n            \"source\": {\n                \"component\": \"replicaset-controller\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:14Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:20Z\",\n            \"count\": 11,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            
\"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-dns-autoscaler-65bc6d4889.15ea1b58598367cb\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-dns-autoscaler-65bc6d4889.15ea1b58598367cb\",\n                \"uid\": \"05f4b42a-04f1-44d5-b947-dc88bcab3bbc\",\n                \"resourceVersion\": \"73\",\n                \"creationTimestamp\": \"2020-01-15T16:11:25Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ReplicaSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-dns-autoscaler-65bc6d4889\",\n                \"uid\": \"f71df3f8-e404-4e81-a096-559cdd2961cb\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"323\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: kube-dns-autoscaler-65bc6d4889-sqctq\",\n            \"source\": {\n                \"component\": \"replicaset-controller\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:25Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:25Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-dns-autoscaler-65bc6d4889.15ea1b8fc07c1018\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-dns-autoscaler-65bc6d4889.15ea1b8fc07c1018\",\n                \"uid\": \"b0b2de3a-0a88-4e56-b2eb-9a03f256b994\",\n                \"resourceVersion\": \"1088\",\n                \"creationTimestamp\": \"2020-01-15T16:15:23Z\"\n            },\n            \"involvedObject\": {\n   
             \"kind\": \"ReplicaSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-dns-autoscaler-65bc6d4889\",\n                \"uid\": \"f71df3f8-e404-4e81-a096-559cdd2961cb\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"764\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: kube-dns-autoscaler-65bc6d4889-c4f5l\",\n            \"source\": {\n                \"component\": \"replicaset-controller\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:15:23Z\",\n            \"lastTimestamp\": \"2020-01-15T16:15:23Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-dns-autoscaler.15ea1b55bdf82a37\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-dns-autoscaler.15ea1b55bdf82a37\",\n                \"uid\": \"90441958-c2d5-424d-88d3-7cad0df52859\",\n                \"resourceVersion\": \"18\",\n                \"creationTimestamp\": \"2020-01-15T16:11:14Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Deployment\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-dns-autoscaler\",\n                \"uid\": \"534d8b40-53ec-4e6a-baa0-a17311766f62\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"320\"\n            },\n            \"reason\": \"ScalingReplicaSet\",\n            \"message\": \"Scaled up replica set kube-dns-autoscaler-65bc6d4889 to 1\",\n            \"source\": {\n                \"component\": \"deployment-controller\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:14Z\",\n            
\"lastTimestamp\": \"2020-01-15T16:11:14Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-bootstrap-e2e-minion-group-q10p.15ea1b59909b5ea2\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy-bootstrap-e2e-minion-group-q10p.15ea1b59909b5ea2\",\n                \"uid\": \"77a22170-70af-432e-827a-ba3281d8c0f5\",\n                \"resourceVersion\": \"158\",\n                \"creationTimestamp\": \"2020-01-15T16:11:31Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-bootstrap-e2e-minion-group-q10p\",\n                \"uid\": \"5e0f693828b9812ac9354fee0a49033c\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-proxy-amd64:v1.18.0-alpha.1.755_05209312b74eac\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-q10p\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:30Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:30Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-bootstrap-e2e-minion-group-q10p.15ea1b5995a49edf\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": 
\"/api/v1/namespaces/kube-system/events/kube-proxy-bootstrap-e2e-minion-group-q10p.15ea1b5995a49edf\",\n                \"uid\": \"c8c8d8df-a68d-43d7-9835-036ea0f7f52b\",\n                \"resourceVersion\": \"163\",\n                \"creationTimestamp\": \"2020-01-15T16:11:32Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-bootstrap-e2e-minion-group-q10p\",\n                \"uid\": \"5e0f693828b9812ac9354fee0a49033c\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-q10p\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:30Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:30Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-bootstrap-e2e-minion-group-q10p.15ea1b59a0665123\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy-bootstrap-e2e-minion-group-q10p.15ea1b59a0665123\",\n                \"uid\": \"1310389d-1eae-441d-bbad-421d06c10ee9\",\n                \"resourceVersion\": \"170\",\n                \"creationTimestamp\": \"2020-01-15T16:11:32Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-bootstrap-e2e-minion-group-q10p\",\n                \"uid\": 
\"5e0f693828b9812ac9354fee0a49033c\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-q10p\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:31Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:31Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-bootstrap-e2e-minion-group-qkcq.15ea1b59b85cd52c\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy-bootstrap-e2e-minion-group-qkcq.15ea1b59b85cd52c\",\n                \"uid\": \"58ba075e-a56f-4ae4-8a80-fd8b2c158e8a\",\n                \"resourceVersion\": \"165\",\n                \"creationTimestamp\": \"2020-01-15T16:11:32Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-bootstrap-e2e-minion-group-qkcq\",\n                \"uid\": \"5e0f693828b9812ac9354fee0a49033c\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-proxy-amd64:v1.18.0-alpha.1.755_05209312b74eac\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-qkcq\"\n            },\n            \"firstTimestamp\": 
\"2020-01-15T16:11:31Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:31Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-bootstrap-e2e-minion-group-qkcq.15ea1b59bbd485cd\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy-bootstrap-e2e-minion-group-qkcq.15ea1b59bbd485cd\",\n                \"uid\": \"809a1b7f-aae6-4622-8929-d77405585220\",\n                \"resourceVersion\": \"172\",\n                \"creationTimestamp\": \"2020-01-15T16:11:32Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-bootstrap-e2e-minion-group-qkcq\",\n                \"uid\": \"5e0f693828b9812ac9354fee0a49033c\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-qkcq\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:31Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:31Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-bootstrap-e2e-minion-group-qkcq.15ea1b59c456f146\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": 
\"/api/v1/namespaces/kube-system/events/kube-proxy-bootstrap-e2e-minion-group-qkcq.15ea1b59c456f146\",\n                \"uid\": \"0a340e0f-086e-4b97-abb1-51a40cc1eedb\",\n                \"resourceVersion\": \"177\",\n                \"creationTimestamp\": \"2020-01-15T16:11:32Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-bootstrap-e2e-minion-group-qkcq\",\n                \"uid\": \"5e0f693828b9812ac9354fee0a49033c\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-qkcq\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:31Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:31Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-bootstrap-e2e-minion-group-qn53.15ea1b598675d8e6\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy-bootstrap-e2e-minion-group-qn53.15ea1b598675d8e6\",\n                \"uid\": \"b9a519db-54cd-436a-bc77-914610f22dba\",\n                \"resourceVersion\": \"147\",\n                \"creationTimestamp\": \"2020-01-15T16:11:31Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-bootstrap-e2e-minion-group-qn53\",\n                \"uid\": 
\"5e0f693828b9812ac9354fee0a49033c\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-proxy-amd64:v1.18.0-alpha.1.755_05209312b74eac\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-qn53\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:30Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:30Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-bootstrap-e2e-minion-group-qn53.15ea1b598aba3236\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy-bootstrap-e2e-minion-group-qn53.15ea1b598aba3236\",\n                \"uid\": \"6830bebd-1b5c-48f0-93af-dc332e3bbf0f\",\n                \"resourceVersion\": \"151\",\n                \"creationTimestamp\": \"2020-01-15T16:11:31Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-bootstrap-e2e-minion-group-qn53\",\n                \"uid\": \"5e0f693828b9812ac9354fee0a49033c\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-qn53\"\n            },\n            \"firstTimestamp\": 
\"2020-01-15T16:11:30Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:30Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-bootstrap-e2e-minion-group-qn53.15ea1b5993235fa4\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy-bootstrap-e2e-minion-group-qn53.15ea1b5993235fa4\",\n                \"uid\": \"1df47a45-3500-4338-a127-01d4cdc18b5c\",\n                \"resourceVersion\": \"156\",\n                \"creationTimestamp\": \"2020-01-15T16:11:31Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-bootstrap-e2e-minion-group-qn53\",\n                \"uid\": \"5e0f693828b9812ac9354fee0a49033c\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-qn53\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:30Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:30Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-bootstrap-e2e-minion-group-vrtv.15ea1b59d87904e7\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": 
\"/api/v1/namespaces/kube-system/events/kube-proxy-bootstrap-e2e-minion-group-vrtv.15ea1b59d87904e7\",\n                \"uid\": \"86a70ea3-9a7d-4a48-a9db-fc25c55475e2\",\n                \"resourceVersion\": \"200\",\n                \"creationTimestamp\": \"2020-01-15T16:11:33Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-bootstrap-e2e-minion-group-vrtv\",\n                \"uid\": \"5e0f693828b9812ac9354fee0a49033c\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-proxy-amd64:v1.18.0-alpha.1.755_05209312b74eac\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-vrtv\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:32Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:32Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-bootstrap-e2e-minion-group-vrtv.15ea1b59deb3ed0c\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy-bootstrap-e2e-minion-group-vrtv.15ea1b59deb3ed0c\",\n                \"uid\": \"e5f68df5-72f4-4666-a012-44188eb51f33\",\n                \"resourceVersion\": \"202\",\n                \"creationTimestamp\": \"2020-01-15T16:11:33Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": 
\"kube-proxy-bootstrap-e2e-minion-group-vrtv\",\n                \"uid\": \"5e0f693828b9812ac9354fee0a49033c\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-vrtv\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:32Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:32Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-bootstrap-e2e-minion-group-vrtv.15ea1b59e99fd63a\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy-bootstrap-e2e-minion-group-vrtv.15ea1b59e99fd63a\",\n                \"uid\": \"8b153ee0-439f-4856-bbc7-4452e58d67c4\",\n                \"resourceVersion\": \"205\",\n                \"creationTimestamp\": \"2020-01-15T16:11:33Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-bootstrap-e2e-minion-group-vrtv\",\n                \"uid\": \"5e0f693828b9812ac9354fee0a49033c\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-vrtv\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:32Z\",\n         
   \"lastTimestamp\": \"2020-01-15T16:11:32Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-scheduler.15ea1b51a7d750c8\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-scheduler.15ea1b51a7d750c8\",\n                \"uid\": \"51e703fc-671a-487c-b5c8-038889f7a72f\",\n                \"resourceVersion\": \"4\",\n                \"creationTimestamp\": \"2020-01-15T16:10:56Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Endpoints\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-scheduler\",\n                \"uid\": \"15c9f11d-009b-470b-b05c-f262110c0c8e\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"153\"\n            },\n            \"reason\": \"LeaderElection\",\n            \"message\": \"bootstrap-e2e-master_5d7b243b-8849-4a10-baf7-fc0a85897178 became leader\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:10:56Z\",\n            \"lastTimestamp\": \"2020-01-15T16:10:56Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-scheduler.15ea1b51a7d780fa\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-scheduler.15ea1b51a7d780fa\",\n                \"uid\": \"78bfc9d0-6429-4ec5-8d38-f75db23c6526\",\n                \"resourceVersion\": \"5\",\n                
\"creationTimestamp\": \"2020-01-15T16:10:56Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Lease\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-scheduler\",\n                \"uid\": \"73b6e660-d22b-430f-953f-7f0f8ec5a029\",\n                \"apiVersion\": \"coordination.k8s.io/v1\",\n                \"resourceVersion\": \"154\"\n            },\n            \"reason\": \"LeaderElection\",\n            \"message\": \"bootstrap-e2e-master_5d7b243b-8849-4a10-baf7-fc0a85897178 became leader\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:10:56Z\",\n            \"lastTimestamp\": \"2020-01-15T16:10:56Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kubernetes-dashboard-7778f8b456-wjltm.15ea1b572b05b5b9\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kubernetes-dashboard-7778f8b456-wjltm.15ea1b572b05b5b9\",\n                \"uid\": \"223c6e94-720f-4c64-8301-c259c9ba1d42\",\n                \"resourceVersion\": \"62\",\n                \"creationTimestamp\": \"2020-01-15T16:11:20Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kubernetes-dashboard-7778f8b456-wjltm\",\n                \"uid\": \"d27b30cd-9233-4cd2-bbc1-d07a21ded57f\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"438\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"no nodes available to schedule pods\",\n            \"source\": {\n                
\"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:20Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:20Z\",\n            \"count\": 2,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kubernetes-dashboard-7778f8b456-wjltm.15ea1b589a11ba03\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kubernetes-dashboard-7778f8b456-wjltm.15ea1b589a11ba03\",\n                \"uid\": \"c45cf765-2b59-4c2d-a1cd-a0be1a1b0444\",\n                \"resourceVersion\": \"81\",\n                \"creationTimestamp\": \"2020-01-15T16:11:26Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kubernetes-dashboard-7778f8b456-wjltm\",\n                \"uid\": \"d27b30cd-9233-4cd2-bbc1-d07a21ded57f\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"440\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/1 nodes are available: 1 node(s) were unschedulable.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:26Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:26Z\",\n            \"count\": 1,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kubernetes-dashboard-7778f8b456-wjltm.15ea1b5933144254\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": 
\"/api/v1/namespaces/kube-system/events/kubernetes-dashboard-7778f8b456-wjltm.15ea1b5933144254\",\n                \"uid\": \"5b511635-ad91-4a13-b2d0-5c115f4b3905\",\n                \"resourceVersion\": \"104\",\n                \"creationTimestamp\": \"2020-01-15T16:11:29Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kubernetes-dashboard-7778f8b456-wjltm\",\n                \"uid\": \"d27b30cd-9233-4cd2-bbc1-d07a21ded57f\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"532\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/2 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 1 node(s) were unschedulable.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:29Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:29Z\",\n            \"count\": 1,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kubernetes-dashboard-7778f8b456-wjltm.15ea1b59b45b4f02\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kubernetes-dashboard-7778f8b456-wjltm.15ea1b59b45b4f02\",\n                \"uid\": \"dcf28c85-f4c0-4f92-8e34-c442bc91e407\",\n                \"resourceVersion\": \"299\",\n                \"creationTimestamp\": \"2020-01-15T16:11:31Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kubernetes-dashboard-7778f8b456-wjltm\",\n                \"uid\": 
\"d27b30cd-9233-4cd2-bbc1-d07a21ded57f\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"568\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) had taints that the pod didn't tolerate.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:31Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:44Z\",\n            \"count\": 4,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kubernetes-dashboard-7778f8b456-wjltm.15ea1b5f3da09bb9\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kubernetes-dashboard-7778f8b456-wjltm.15ea1b5f3da09bb9\",\n                \"uid\": \"f15fdabd-f0d9-45a9-b35b-983b512e4d2b\",\n                \"resourceVersion\": \"337\",\n                \"creationTimestamp\": \"2020-01-15T16:11:55Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kubernetes-dashboard-7778f8b456-wjltm\",\n                \"uid\": \"d27b30cd-9233-4cd2-bbc1-d07a21ded57f\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"640\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/kubernetes-dashboard-7778f8b456-wjltm to bootstrap-e2e-minion-group-qkcq\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:55Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:55Z\",\n         
   \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kubernetes-dashboard-7778f8b456-wjltm.15ea1b5fa6103820\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kubernetes-dashboard-7778f8b456-wjltm.15ea1b5fa6103820\",\n                \"uid\": \"01c0b1aa-3eca-40ba-b48d-d5d01c7010b4\",\n                \"resourceVersion\": \"351\",\n                \"creationTimestamp\": \"2020-01-15T16:11:57Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kubernetes-dashboard-7778f8b456-wjltm\",\n                \"uid\": \"d27b30cd-9233-4cd2-bbc1-d07a21ded57f\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"799\",\n                \"fieldPath\": \"spec.containers{kubernetes-dashboard}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-qkcq\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:56Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:56Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kubernetes-dashboard-7778f8b456-wjltm.15ea1b605a7aed04\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": 
\"/api/v1/namespaces/kube-system/events/kubernetes-dashboard-7778f8b456-wjltm.15ea1b605a7aed04\",\n                \"uid\": \"2bfb8df2-c82d-49fe-89ca-86b764842c57\",\n                \"resourceVersion\": \"361\",\n                \"creationTimestamp\": \"2020-01-15T16:11:59Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kubernetes-dashboard-7778f8b456-wjltm\",\n                \"uid\": \"d27b30cd-9233-4cd2-bbc1-d07a21ded57f\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"799\",\n                \"fieldPath\": \"spec.containers{kubernetes-dashboard}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-qkcq\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:59Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:59Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kubernetes-dashboard-7778f8b456-wjltm.15ea1b609a56df3e\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kubernetes-dashboard-7778f8b456-wjltm.15ea1b609a56df3e\",\n                \"uid\": \"d5d31e82-9036-4438-bcda-210e369ae54b\",\n                \"resourceVersion\": \"365\",\n                \"creationTimestamp\": \"2020-01-15T16:12:01Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": 
\"kubernetes-dashboard-7778f8b456-wjltm\",\n                \"uid\": \"d27b30cd-9233-4cd2-bbc1-d07a21ded57f\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"799\",\n                \"fieldPath\": \"spec.containers{kubernetes-dashboard}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kubernetes-dashboard\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-qkcq\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:12:01Z\",\n            \"lastTimestamp\": \"2020-01-15T16:12:01Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kubernetes-dashboard-7778f8b456-wjltm.15ea1b60bdaff77b\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kubernetes-dashboard-7778f8b456-wjltm.15ea1b60bdaff77b\",\n                \"uid\": \"b66ec848-3631-4334-ad51-0cf149517a9d\",\n                \"resourceVersion\": \"372\",\n                \"creationTimestamp\": \"2020-01-15T16:12:01Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kubernetes-dashboard-7778f8b456-wjltm\",\n                \"uid\": \"d27b30cd-9233-4cd2-bbc1-d07a21ded57f\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"799\",\n                \"fieldPath\": \"spec.containers{kubernetes-dashboard}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kubernetes-dashboard\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                
\"host\": \"bootstrap-e2e-minion-group-qkcq\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:12:01Z\",\n            \"lastTimestamp\": \"2020-01-15T16:12:01Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kubernetes-dashboard-7778f8b456.15ea1b572b0ff9ad\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kubernetes-dashboard-7778f8b456.15ea1b572b0ff9ad\",\n                \"uid\": \"a230c873-fd63-422d-8d09-76bd6c39dc2f\",\n                \"resourceVersion\": \"60\",\n                \"creationTimestamp\": \"2020-01-15T16:11:20Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ReplicaSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kubernetes-dashboard-7778f8b456\",\n                \"uid\": \"73b8c55d-67b5-4bda-aede-d7e8998ddcd1\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"436\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: kubernetes-dashboard-7778f8b456-wjltm\",\n            \"source\": {\n                \"component\": \"replicaset-controller\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:20Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:20Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kubernetes-dashboard.15ea1b572a33684f\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": 
\"/api/v1/namespaces/kube-system/events/kubernetes-dashboard.15ea1b572a33684f\",\n                \"uid\": \"8b2a780f-6980-4e6e-8327-3fa8a0542c63\",\n                \"resourceVersion\": \"59\",\n                \"creationTimestamp\": \"2020-01-15T16:11:20Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Deployment\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kubernetes-dashboard\",\n                \"uid\": \"9ebd2314-16be-4977-9c0a-22adf5d70e61\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"435\"\n            },\n            \"reason\": \"ScalingReplicaSet\",\n            \"message\": \"Scaled up replica set kubernetes-dashboard-7778f8b456 to 1\",\n            \"source\": {\n                \"component\": \"deployment-controller\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:20Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:20Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"l7-default-backend-678889f899-4q2t5.15ea1b5708be5224\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/l7-default-backend-678889f899-4q2t5.15ea1b5708be5224\",\n                \"uid\": \"b8da80c1-1c44-4c59-bde4-f6c375be4619\",\n                \"resourceVersion\": \"57\",\n                \"creationTimestamp\": \"2020-01-15T16:11:20Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"l7-default-backend-678889f899-4q2t5\",\n                \"uid\": \"dcf2d45a-6210-4989-8872-599d096457ff\",\n                \"apiVersion\": \"v1\",\n                
\"resourceVersion\": \"420\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"no nodes available to schedule pods\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:19Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:20Z\",\n            \"count\": 2,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"l7-default-backend-678889f899-4q2t5.15ea1b58e10fd3f7\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/l7-default-backend-678889f899-4q2t5.15ea1b58e10fd3f7\",\n                \"uid\": \"95821751-5b20-4fea-a078-6fe8a311d7af\",\n                \"resourceVersion\": \"93\",\n                \"creationTimestamp\": \"2020-01-15T16:11:27Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"l7-default-backend-678889f899-4q2t5\",\n                \"uid\": \"dcf2d45a-6210-4989-8872-599d096457ff\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"423\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/1 nodes are available: 1 node(s) were unschedulable.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:27Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:28Z\",\n            \"count\": 2,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": 
{\n                \"name\": \"l7-default-backend-678889f899-4q2t5.15ea1b5a3e2937da\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/l7-default-backend-678889f899-4q2t5.15ea1b5a3e2937da\",\n                \"uid\": \"8efb1fdc-647b-499e-9f60-8a4d019ccf37\",\n                \"resourceVersion\": \"258\",\n                \"creationTimestamp\": \"2020-01-15T16:11:33Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"l7-default-backend-678889f899-4q2t5\",\n                \"uid\": \"dcf2d45a-6210-4989-8872-599d096457ff\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"546\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/5 nodes are available: 1 node(s) were unschedulable, 4 node(s) had taints that the pod didn't tolerate.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:33Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:38Z\",\n            \"count\": 2,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"l7-default-backend-678889f899-4q2t5.15ea1b5d245e3df7\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/l7-default-backend-678889f899-4q2t5.15ea1b5d245e3df7\",\n                \"uid\": \"52cc9bbe-4302-426d-b440-01200e0d5cc8\",\n                \"resourceVersion\": \"307\",\n                \"creationTimestamp\": \"2020-01-15T16:11:46Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                
\"namespace\": \"kube-system\",\n                \"name\": \"l7-default-backend-678889f899-4q2t5\",\n                \"uid\": \"dcf2d45a-6210-4989-8872-599d096457ff\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"659\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/l7-default-backend-678889f899-4q2t5 to bootstrap-e2e-minion-group-q10p\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:46Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:46Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"l7-default-backend-678889f899-4q2t5.15ea1b5f38d67703\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/l7-default-backend-678889f899-4q2t5.15ea1b5f38d67703\",\n                \"uid\": \"d0923bad-cc6b-4efc-8d69-fec4249a23b2\",\n                \"resourceVersion\": \"336\",\n                \"creationTimestamp\": \"2020-01-15T16:11:55Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"l7-default-backend-678889f899-4q2t5\",\n                \"uid\": \"dcf2d45a-6210-4989-8872-599d096457ff\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"734\",\n                \"fieldPath\": \"spec.containers{default-http-backend}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"k8s.gcr.io/ingress-gce-404-server-with-metrics-amd64:v1.6.0\\\"\",\n            \"source\": {\n                \"component\": 
\"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-q10p\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:55Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:55Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"l7-default-backend-678889f899-4q2t5.15ea1b5f90da0adf\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/l7-default-backend-678889f899-4q2t5.15ea1b5f90da0adf\",\n                \"uid\": \"483c216a-3c2a-473c-a76a-e77837a526ea\",\n                \"resourceVersion\": \"345\",\n                \"creationTimestamp\": \"2020-01-15T16:11:56Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"l7-default-backend-678889f899-4q2t5\",\n                \"uid\": \"dcf2d45a-6210-4989-8872-599d096457ff\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"734\",\n                \"fieldPath\": \"spec.containers{default-http-backend}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"k8s.gcr.io/ingress-gce-404-server-with-metrics-amd64:v1.6.0\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-q10p\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:56Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:56Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n     
           \"name\": \"l7-default-backend-678889f899-4q2t5.15ea1b5f9a80df41\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/l7-default-backend-678889f899-4q2t5.15ea1b5f9a80df41\",\n                \"uid\": \"a1ee0ca3-4a90-4c3a-b435-10642280b3e6\",\n                \"resourceVersion\": \"348\",\n                \"creationTimestamp\": \"2020-01-15T16:11:57Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"l7-default-backend-678889f899-4q2t5\",\n                \"uid\": \"dcf2d45a-6210-4989-8872-599d096457ff\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"734\",\n                \"fieldPath\": \"spec.containers{default-http-backend}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container default-http-backend\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-q10p\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:56Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:56Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"l7-default-backend-678889f899-4q2t5.15ea1b615d7c8f0c\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/l7-default-backend-678889f899-4q2t5.15ea1b615d7c8f0c\",\n                \"uid\": \"f3c551e6-1665-491e-a992-550d4b02680f\",\n                \"resourceVersion\": \"378\",\n                \"creationTimestamp\": \"2020-01-15T16:12:04Z\"\n            },\n            \"involvedObject\": {\n              
  \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"l7-default-backend-678889f899-4q2t5\",\n                \"uid\": \"dcf2d45a-6210-4989-8872-599d096457ff\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"734\",\n                \"fieldPath\": \"spec.containers{default-http-backend}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container default-http-backend\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-q10p\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:12:04Z\",\n            \"lastTimestamp\": \"2020-01-15T16:12:04Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"l7-default-backend-678889f899.15ea1b55b7c3d371\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/l7-default-backend-678889f899.15ea1b55b7c3d371\",\n                \"uid\": \"d765d584-5c33-44ae-8e06-e7ba9fc5ee16\",\n                \"resourceVersion\": \"24\",\n                \"creationTimestamp\": \"2020-01-15T16:11:14Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ReplicaSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"l7-default-backend-678889f899\",\n                \"uid\": \"99b01737-190e-4761-ad8b-fca07817224a\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"309\"\n            },\n            \"reason\": \"FailedCreate\",\n            \"message\": \"Error creating: pods \\\"l7-default-backend-678889f899-\\\" is forbidden: no providers available to validate pod request\",\n   
         \"source\": {\n                \"component\": \"replicaset-controller\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:14Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:14Z\",\n            \"count\": 6,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"l7-default-backend-678889f899.15ea1b55d49e87dd\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/l7-default-backend-678889f899.15ea1b55d49e87dd\",\n                \"uid\": \"bcad4556-3833-4e7c-8a32-95bff1ba8313\",\n                \"resourceVersion\": \"40\",\n                \"creationTimestamp\": \"2020-01-15T16:11:15Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ReplicaSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"l7-default-backend-678889f899\",\n                \"uid\": \"99b01737-190e-4761-ad8b-fca07817224a\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"311\"\n            },\n            \"reason\": \"FailedCreate\",\n            \"message\": \"Error creating: pods \\\"l7-default-backend-678889f899-\\\" is forbidden: unable to validate against any pod security policy: []\",\n            \"source\": {\n                \"component\": \"replicaset-controller\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:14Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:17Z\",\n            \"count\": 4,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"l7-default-backend-678889f899.15ea1b5708a0c4b1\",\n          
      \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/l7-default-backend-678889f899.15ea1b5708a0c4b1\",\n                \"uid\": \"e92af641-2365-41c1-9901-222afdda56b9\",\n                \"resourceVersion\": \"55\",\n                \"creationTimestamp\": \"2020-01-15T16:11:19Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ReplicaSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"l7-default-backend-678889f899\",\n                \"uid\": \"99b01737-190e-4761-ad8b-fca07817224a\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"311\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: l7-default-backend-678889f899-4q2t5\",\n            \"source\": {\n                \"component\": \"replicaset-controller\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:19Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:19Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"l7-default-backend.15ea1b55b722119b\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/l7-default-backend.15ea1b55b722119b\",\n                \"uid\": \"6115557d-cc95-4d6c-bb33-07696149839d\",\n                \"resourceVersion\": \"11\",\n                \"creationTimestamp\": \"2020-01-15T16:11:14Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Deployment\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"l7-default-backend\",\n                \"uid\": \"557c6943-b46c-4b44-8e3d-3f217045a746\",\n                \"apiVersion\": 
\"apps/v1\",\n                \"resourceVersion\": \"308\"\n            },\n            \"reason\": \"ScalingReplicaSet\",\n            \"message\": \"Scaled up replica set l7-default-backend-678889f899 to 1\",\n            \"source\": {\n                \"component\": \"deployment-controller\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:14Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:14Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"l7-lb-controller-bootstrap-e2e-master.15ea1b52e4bd0a4c\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/l7-lb-controller-bootstrap-e2e-master.15ea1b52e4bd0a4c\",\n                \"uid\": \"816adf80-9c02-4c47-b185-f31d3b87a14c\",\n                \"resourceVersion\": \"71\",\n                \"creationTimestamp\": \"2020-01-15T16:11:22Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"l7-lb-controller-bootstrap-e2e-master\",\n                \"uid\": \"be7c596b31dfe9522aa72160f772b7f6\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{l7-lb-controller}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container l7-lb-controller\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-master\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:02Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:22Z\",\n            \"count\": 3,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": 
\"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"l7-lb-controller-bootstrap-e2e-master.15ea1b5323f378e4\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/l7-lb-controller-bootstrap-e2e-master.15ea1b5323f378e4\",\n                \"uid\": \"a67b479d-c741-48e9-a5ad-9a372bda6ce8\",\n                \"resourceVersion\": \"72\",\n                \"creationTimestamp\": \"2020-01-15T16:11:25Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"l7-lb-controller-bootstrap-e2e-master\",\n                \"uid\": \"be7c596b31dfe9522aa72160f772b7f6\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{l7-lb-controller}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container l7-lb-controller\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-master\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:03Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:25Z\",\n            \"count\": 3,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"l7-lb-controller-bootstrap-e2e-master.15ea1b538b7b10c4\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/l7-lb-controller-bootstrap-e2e-master.15ea1b538b7b10c4\",\n                \"uid\": \"9eaf89d8-f352-40ee-82b7-df990b38449e\",\n                \"resourceVersion\": \"70\",\n                \"creationTimestamp\": \"2020-01-15T16:11:22Z\"\n            },\n      
      \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"l7-lb-controller-bootstrap-e2e-master\",\n                \"uid\": \"be7c596b31dfe9522aa72160f772b7f6\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{l7-lb-controller}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/ingress-gce-glbc-amd64:v1.6.1\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-master\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:04Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:21Z\",\n            \"count\": 2,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"metadata-proxy-v0.1-666fv.15ea1b59632b758d\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/metadata-proxy-v0.1-666fv.15ea1b59632b758d\",\n                \"uid\": \"d558c585-7771-4b99-bd43-a3af76a4e2d4\",\n                \"resourceVersion\": \"118\",\n                \"creationTimestamp\": \"2020-01-15T16:11:30Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"metadata-proxy-v0.1-666fv\",\n                \"uid\": \"2d23fcba-1758-495c-bd9f-86fe7151d542\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"565\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/metadata-proxy-v0.1-666fv to bootstrap-e2e-minion-group-qn53\",\n            \"source\": 
{\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:30Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:30Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"metadata-proxy-v0.1-666fv.15ea1b59b3d0012a\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/metadata-proxy-v0.1-666fv.15ea1b59b3d0012a\",\n                \"uid\": \"b0b61b40-7c92-4365-9ee2-504e81a2005e\",\n                \"resourceVersion\": \"173\",\n                \"creationTimestamp\": \"2020-01-15T16:11:32Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"metadata-proxy-v0.1-666fv\",\n                \"uid\": \"2d23fcba-1758-495c-bd9f-86fe7151d542\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"590\",\n                \"fieldPath\": \"spec.containers{metadata-proxy}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"k8s.gcr.io/metadata-proxy:v0.1.12\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-qn53\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:31Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:31Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"metadata-proxy-v0.1-666fv.15ea1b59f552d4ab\",\n                
\"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/metadata-proxy-v0.1-666fv.15ea1b59f552d4ab\",\n                \"uid\": \"2bb6e0c2-c890-4d7b-a245-42a84b64e06c\",\n                \"resourceVersion\": \"183\",\n                \"creationTimestamp\": \"2020-01-15T16:11:32Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"metadata-proxy-v0.1-666fv\",\n                \"uid\": \"2d23fcba-1758-495c-bd9f-86fe7151d542\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"590\",\n                \"fieldPath\": \"spec.containers{metadata-proxy}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"k8s.gcr.io/metadata-proxy:v0.1.12\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-qn53\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:32Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:32Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"metadata-proxy-v0.1-666fv.15ea1b59fa149cc4\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/metadata-proxy-v0.1-666fv.15ea1b59fa149cc4\",\n                \"uid\": \"59af892f-05dc-4a47-aee4-9ef51367fc33\",\n                \"resourceVersion\": \"185\",\n                \"creationTimestamp\": \"2020-01-15T16:11:33Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": 
"metadata-proxy-v0.1-666fv",
                "uid": "2d23fcba-1758-495c-bd9f-86fe7151d542",
                "apiVersion": "v1",
                "resourceVersion": "590",
                "fieldPath": "spec.containers{metadata-proxy}"
            },
            "reason": "Created",
            "message": "Created container metadata-proxy",
            "source": {
                "component": "kubelet",
                "host": "bootstrap-e2e-minion-group-qn53"
            },
            "firstTimestamp": "2020-01-15T16:11:32Z",
            "lastTimestamp": "2020-01-15T16:11:32Z",
            "count": 1,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "metadata-proxy-v0.1-666fv.15ea1b5a3bf6d47d",
                "namespace": "kube-system",
                "selfLink": "/api/v1/namespaces/kube-system/events/metadata-proxy-v0.1-666fv.15ea1b5a3bf6d47d",
                "uid": "ccf59880-3de1-404d-b3a3-c937d7e0cb31",
                "resourceVersion": "195",
                "creationTimestamp": "2020-01-15T16:11:33Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": "kube-system",
                "name": "metadata-proxy-v0.1-666fv",
                "uid": "2d23fcba-1758-495c-bd9f-86fe7151d542",
                "apiVersion": "v1",
                "resourceVersion": "590",
                "fieldPath": "spec.containers{metadata-proxy}"
            },
            "reason": "Started",
            "message": "Started container metadata-proxy",
            "source": {
                "component": "kubelet",
                "host": "bootstrap-e2e-minion-group-qn53"
            },
            "firstTimestamp": "2020-01-15T16:11:33Z",
            "lastTimestamp": "2020-01-15T16:11:33Z",
            "count": 1,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "metadata-proxy-v0.1-666fv.15ea1b5a3d0d2419",
                "namespace": "kube-system",
                "selfLink": "/api/v1/namespaces/kube-system/events/metadata-proxy-v0.1-666fv.15ea1b5a3d0d2419",
                "uid": "086e6152-6643-428c-b950-d656780fe9f5",
                "resourceVersion": "199",
                "creationTimestamp": "2020-01-15T16:11:33Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": "kube-system",
                "name": "metadata-proxy-v0.1-666fv",
                "uid": "2d23fcba-1758-495c-bd9f-86fe7151d542",
                "apiVersion": "v1",
                "resourceVersion": "590",
                "fieldPath": "spec.containers{prometheus-to-sd-exporter}"
            },
            "reason": "Pulling",
            "message": "Pulling image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\"",
            "source": {
                "component": "kubelet",
                "host": "bootstrap-e2e-minion-group-qn53"
            },
            "firstTimestamp": "2020-01-15T16:11:33Z",
            "lastTimestamp": "2020-01-15T16:11:33Z",
            "count": 1,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "metadata-proxy-v0.1-666fv.15ea1b5aab07c276",
                "namespace": "kube-system",
                "selfLink": "/api/v1/namespaces/kube-system/events/metadata-proxy-v0.1-666fv.15ea1b5aab07c276",
                "uid": "dcdcebd1-7702-42ba-a22b-6d55110bf10b",
                "resourceVersion": "247",
                "creationTimestamp": "2020-01-15T16:11:35Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": "kube-system",
                "name": "metadata-proxy-v0.1-666fv",
                "uid": "2d23fcba-1758-495c-bd9f-86fe7151d542",
                "apiVersion": "v1",
                "resourceVersion": "590",
                "fieldPath": "spec.containers{prometheus-to-sd-exporter}"
            },
            "reason": "Pulled",
            "message": "Successfully pulled image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\"",
            "source": {
                "component": "kubelet",
                "host": "bootstrap-e2e-minion-group-qn53"
            },
            "firstTimestamp": "2020-01-15T16:11:35Z",
            "lastTimestamp": "2020-01-15T16:11:35Z",
            "count": 1,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "metadata-proxy-v0.1-666fv.15ea1b5b0f716f55",
                "namespace": "kube-system",
                "selfLink": "/api/v1/namespaces/kube-system/events/metadata-proxy-v0.1-666fv.15ea1b5b0f716f55",
                "uid": "f10a46e4-dae7-4912-89bf-7aa8028ab314",
                "resourceVersion": "256",
                "creationTimestamp": "2020-01-15T16:11:37Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": "kube-system",
                "name": "metadata-proxy-v0.1-666fv",
                "uid": "2d23fcba-1758-495c-bd9f-86fe7151d542",
                "apiVersion": "v1",
                "resourceVersion": "590",
                "fieldPath": "spec.containers{prometheus-to-sd-exporter}"
            },
            "reason": "Created",
            "message": "Created container prometheus-to-sd-exporter",
            "source": {
                "component": "kubelet",
                "host": "bootstrap-e2e-minion-group-qn53"
            },
            "firstTimestamp": "2020-01-15T16:11:37Z",
            "lastTimestamp": "2020-01-15T16:11:37Z",
            "count": 1,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "metadata-proxy-v0.1-666fv.15ea1b5b95ba5e92",
                "namespace": "kube-system",
                "selfLink": "/api/v1/namespaces/kube-system/events/metadata-proxy-v0.1-666fv.15ea1b5b95ba5e92",
                "uid": "5bb95b8e-a298-4eef-b376-709a968050ff",
                "resourceVersion": "270",
                "creationTimestamp": "2020-01-15T16:11:39Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": "kube-system",
                "name": "metadata-proxy-v0.1-666fv",
                "uid": "2d23fcba-1758-495c-bd9f-86fe7151d542",
                "apiVersion": "v1",
                "resourceVersion": "590",
                "fieldPath": "spec.containers{prometheus-to-sd-exporter}"
            },
            "reason": "Started",
            "message": "Started container prometheus-to-sd-exporter",
            "source": {
                "component": "kubelet",
                "host": "bootstrap-e2e-minion-group-qn53"
            },
            "firstTimestamp": "2020-01-15T16:11:39Z",
            "lastTimestamp": "2020-01-15T16:11:39Z",
            "count": 1,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "metadata-proxy-v0.1-9nsx7.15ea1b59997ba0f3",
                "namespace": "kube-system",
                "selfLink": "/api/v1/namespaces/kube-system/events/metadata-proxy-v0.1-9nsx7.15ea1b59997ba0f3",
                "uid": "4cc419cd-5947-4ed8-88b0-eb814d7e10d8",
                "resourceVersion": "145",
                "creationTimestamp": "2020-01-15T16:11:31Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": "kube-system",
                "name": "metadata-proxy-v0.1-9nsx7",
                "uid": "137330e6-7b00-4b62-a729-01b147b164e7",
                "apiVersion": "v1",
                "resourceVersion": "602"
            },
            "reason": "Scheduled",
            "message": "Successfully assigned kube-system/metadata-proxy-v0.1-9nsx7 to bootstrap-e2e-minion-group-qkcq",
            "source": {
                "component": "default-scheduler"
            },
            "firstTimestamp": "2020-01-15T16:11:30Z",
            "lastTimestamp": "2020-01-15T16:11:30Z",
            "count": 1,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "metadata-proxy-v0.1-9nsx7.15ea1b59e1df4910",
                "namespace": "kube-system",
                "selfLink": "/api/v1/namespaces/kube-system/events/metadata-proxy-v0.1-9nsx7.15ea1b59e1df4910",
                "uid": "60c37c7f-9cfc-4511-8e9b-a23ad606d390",
                "resourceVersion": "182",
                "creationTimestamp": "2020-01-15T16:11:32Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": "kube-system",
                "name": "metadata-proxy-v0.1-9nsx7",
                "uid": "137330e6-7b00-4b62-a729-01b147b164e7",
                "apiVersion": "v1",
                "resourceVersion": "621"
            },
            "reason": "FailedMount",
            "message": "MountVolume.SetUp failed for volume \"metadata-proxy-token-mplx6\" : failed to sync secret cache: timed out waiting for the condition",
            "source": {
                "component": "kubelet",
                "host": "bootstrap-e2e-minion-group-qkcq"
            },
            "firstTimestamp": "2020-01-15T16:11:32Z",
            "lastTimestamp": "2020-01-15T16:11:32Z",
            "count": 1,
            "type": "Warning",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "metadata-proxy-v0.1-9nsx7.15ea1b5a5b2a5a48",
                "namespace": "kube-system",
                "selfLink": "/api/v1/namespaces/kube-system/events/metadata-proxy-v0.1-9nsx7.15ea1b5a5b2a5a48",
                "uid": "1e71f78c-ab60-4c67-a3ad-27fe8a32cb10",
                "resourceVersion": "216",
                "creationTimestamp": "2020-01-15T16:11:34Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": "kube-system",
                "name": "metadata-proxy-v0.1-9nsx7",
                "uid": "137330e6-7b00-4b62-a729-01b147b164e7",
                "apiVersion": "v1",
                "resourceVersion": "621",
                "fieldPath": "spec.containers{metadata-proxy}"
            },
            "reason": "Pulling",
            "message": "Pulling image \"k8s.gcr.io/metadata-proxy:v0.1.12\"",
            "source": {
                "component": "kubelet",
                "host": "bootstrap-e2e-minion-group-qkcq"
            },
            "firstTimestamp": "2020-01-15T16:11:34Z",
            "lastTimestamp": "2020-01-15T16:11:34Z",
            "count": 1,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "metadata-proxy-v0.1-9nsx7.15ea1b5ac0f73472",
                "namespace": "kube-system",
                "selfLink": "/api/v1/namespaces/kube-system/events/metadata-proxy-v0.1-9nsx7.15ea1b5ac0f73472",
                "uid": "02345297-76d8-4a67-be1d-ae9da056d9f4",
                "resourceVersion": "250",
                "creationTimestamp": "2020-01-15T16:11:35Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": "kube-system",
                "name": "metadata-proxy-v0.1-9nsx7",
                "uid": "137330e6-7b00-4b62-a729-01b147b164e7",
                "apiVersion": "v1",
                "resourceVersion": "621",
                "fieldPath": "spec.containers{metadata-proxy}"
            },
            "reason": "Pulled",
            "message": "Successfully pulled image \"k8s.gcr.io/metadata-proxy:v0.1.12\"",
            "source": {
                "component": "kubelet",
                "host": "bootstrap-e2e-minion-group-qkcq"
            },
            "firstTimestamp": "2020-01-15T16:11:35Z",
            "lastTimestamp": "2020-01-15T16:11:35Z",
            "count": 1,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "metadata-proxy-v0.1-9nsx7.15ea1b5b1260f427",
                "namespace": "kube-system",
                "selfLink": "/api/v1/namespaces/kube-system/events/metadata-proxy-v0.1-9nsx7.15ea1b5b1260f427",
                "uid": "74737c73-ad6e-4aeb-982b-5bcb612d382e",
                "resourceVersion": "257",
                "creationTimestamp": "2020-01-15T16:11:37Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": "kube-system",
                "name": "metadata-proxy-v0.1-9nsx7",
                "uid": "137330e6-7b00-4b62-a729-01b147b164e7",
                "apiVersion": "v1",
                "resourceVersion": "621",
                "fieldPath": "spec.containers{metadata-proxy}"
            },
            "reason": "Created",
            "message": "Created container metadata-proxy",
            "source": {
                "component": "kubelet",
                "host": "bootstrap-e2e-minion-group-qkcq"
            },
            "firstTimestamp": "2020-01-15T16:11:37Z",
            "lastTimestamp": "2020-01-15T16:11:37Z",
            "count": 1,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "metadata-proxy-v0.1-9nsx7.15ea1b5b442fa3aa",
                "namespace": "kube-system",
                "selfLink": "/api/v1/namespaces/kube-system/events/metadata-proxy-v0.1-9nsx7.15ea1b5b442fa3aa",
                "uid": "93be997c-a438-4b73-a053-2316c7931a77",
                "resourceVersion": "259",
                "creationTimestamp": "2020-01-15T16:11:38Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": "kube-system",
                "name": "metadata-proxy-v0.1-9nsx7",
                "uid": "137330e6-7b00-4b62-a729-01b147b164e7",
                "apiVersion": "v1",
                "resourceVersion": "621",
                "fieldPath": "spec.containers{metadata-proxy}"
            },
            "reason": "Started",
            "message": "Started container metadata-proxy",
            "source": {
                "component": "kubelet",
                "host": "bootstrap-e2e-minion-group-qkcq"
            },
            "firstTimestamp": "2020-01-15T16:11:38Z",
            "lastTimestamp": "2020-01-15T16:11:38Z",
            "count": 1,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "metadata-proxy-v0.1-9nsx7.15ea1b5b44808028",
                "namespace": "kube-system",
                "selfLink": "/api/v1/namespaces/kube-system/events/metadata-proxy-v0.1-9nsx7.15ea1b5b44808028",
                "uid": "27bb6a48-10e7-4501-9ae8-f8e453c96477",
                "resourceVersion": "260",
                "creationTimestamp": "2020-01-15T16:11:38Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": "kube-system",
                "name": "metadata-proxy-v0.1-9nsx7",
                "uid": "137330e6-7b00-4b62-a729-01b147b164e7",
                "apiVersion": "v1",
                "resourceVersion": "621",
                "fieldPath": "spec.containers{prometheus-to-sd-exporter}"
            },
            "reason": "Pulling",
            "message": "Pulling image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\"",
            "source": {
                "component": "kubelet",
                "host": "bootstrap-e2e-minion-group-qkcq"
            },
            "firstTimestamp": "2020-01-15T16:11:38Z",
            "lastTimestamp": "2020-01-15T16:11:38Z",
            "count": 1,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "metadata-proxy-v0.1-9nsx7.15ea1b5b955babcc",
                "namespace": "kube-system",
                "selfLink": "/api/v1/namespaces/kube-system/events/metadata-proxy-v0.1-9nsx7.15ea1b5b955babcc",
                "uid": "d1d89112-c395-4727-b782-26a0d051db1b",
                "resourceVersion": "269",
                "creationTimestamp": "2020-01-15T16:11:39Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": "kube-system",
                "name": "metadata-proxy-v0.1-9nsx7",
                "uid": "137330e6-7b00-4b62-a729-01b147b164e7",
                "apiVersion": "v1",
                "resourceVersion": "621",
                "fieldPath": "spec.containers{prometheus-to-sd-exporter}"
            },
            "reason": "Pulled",
            "message": "Successfully pulled image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\"",
            "source": {
                "component": "kubelet",
                "host": "bootstrap-e2e-minion-group-qkcq"
            },
            "firstTimestamp": "2020-01-15T16:11:39Z",
            "lastTimestamp": "2020-01-15T16:11:39Z",
            "count": 1,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "metadata-proxy-v0.1-9nsx7.15ea1b5bebde3363",
                "namespace": "kube-system",
                "selfLink": "/api/v1/namespaces/kube-system/events/metadata-proxy-v0.1-9nsx7.15ea1b5bebde3363",
                "uid": "37d0422d-5214-4274-9d8f-063ab27c552b",
                "resourceVersion": "273",
                "creationTimestamp": "2020-01-15T16:11:40Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": "kube-system",
                "name": "metadata-proxy-v0.1-9nsx7",
                "uid": "137330e6-7b00-4b62-a729-01b147b164e7",
                "apiVersion": "v1",
                "resourceVersion": "621",
                "fieldPath": "spec.containers{prometheus-to-sd-exporter}"
            },
            "reason": "Created",
            "message": "Created container prometheus-to-sd-exporter",
            "source": {
                "component": "kubelet",
                "host": "bootstrap-e2e-minion-group-qkcq"
            },
            "firstTimestamp": "2020-01-15T16:11:40Z",
            "lastTimestamp": "2020-01-15T16:11:40Z",
            "count": 1,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "metadata-proxy-v0.1-9nsx7.15ea1b5c2ce5b31b",
                "namespace": "kube-system",
                "selfLink": "/api/v1/namespaces/kube-system/events/metadata-proxy-v0.1-9nsx7.15ea1b5c2ce5b31b",
                "uid": "2dfe3e7f-70f8-4308-b63f-14c81da546e5",
                "resourceVersion": "277",
                "creationTimestamp": "2020-01-15T16:11:42Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": "kube-system",
                "name": "metadata-proxy-v0.1-9nsx7",
                "uid": "137330e6-7b00-4b62-a729-01b147b164e7",
                "apiVersion": "v1",
                "resourceVersion": "621",
                "fieldPath": "spec.containers{prometheus-to-sd-exporter}"
            },
            "reason": "Started",
            "message": "Started container prometheus-to-sd-exporter",
            "source": {
                "component": "kubelet",
                "host": "bootstrap-e2e-minion-group-qkcq"
            },
            "firstTimestamp": "2020-01-15T16:11:42Z",
            "lastTimestamp": "2020-01-15T16:11:42Z",
            "count": 1,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "metadata-proxy-v0.1-chbgg.15ea1b588c2e8334",
                "namespace": "kube-system",
                "selfLink": "/api/v1/namespaces/kube-system/events/metadata-proxy-v0.1-chbgg.15ea1b588c2e8334",
                "uid": "6275b2fd-3c06-493f-8dac-7784459ee60a",
                "resourceVersion": "79",
                "creationTimestamp": "2020-01-15T16:11:26Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": "kube-system",
                "name": "metadata-proxy-v0.1-chbgg",
                "uid": "12d3314f-5cd7-42c6-a8a7-a741f9017ff4",
                "apiVersion": "v1",
                "resourceVersion": "514"
            },
            "reason": "Scheduled",
            "message": "Successfully assigned kube-system/metadata-proxy-v0.1-chbgg to bootstrap-e2e-master",
            "source": {
                "component": "default-scheduler"
            },
            "firstTimestamp": "2020-01-15T16:11:26Z",
            "lastTimestamp": "2020-01-15T16:11:26Z",
            "count": 1,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "metadata-proxy-v0.1-chbgg.15ea1b5902b3bfe5",
                "namespace": "kube-system",
                "selfLink": "/api/v1/namespaces/kube-system/events/metadata-proxy-v0.1-chbgg.15ea1b5902b3bfe5",
                "uid": "6243b2a0-c2e4-43d1-a2da-77ffc9bd571a",
                "resourceVersion": "92",
                "creationTimestamp": "2020-01-15T16:11:28Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": "kube-system",
                "name": "metadata-proxy-v0.1-chbgg",
                "uid": "12d3314f-5cd7-42c6-a8a7-a741f9017ff4",
                "apiVersion": "v1",
                "resourceVersion": "529",
                "fieldPath": "spec.containers{metadata-proxy}"
            },
            "reason": "Pulling",
            "message": "Pulling image \"k8s.gcr.io/metadata-proxy:v0.1.12\"",
            "source": {
                "component": "kubelet",
                "host": "bootstrap-e2e-master"
            },
            "firstTimestamp": "2020-01-15T16:11:28Z",
            "lastTimestamp": "2020-01-15T16:11:28Z",
            "count": 1,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "metadata-proxy-v0.1-chbgg.15ea1b5936598aab",
                "namespace": "kube-system",
                "selfLink": "/api/v1/namespaces/kube-system/events/metadata-proxy-v0.1-chbgg.15ea1b5936598aab",
                "uid": "96a877ba-5409-4794-b5f3-669f8e149463",
                "resourceVersion": "98",
                "creationTimestamp": "2020-01-15T16:11:29Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": "kube-system",
                "name": "metadata-proxy-v0.1-chbgg",
                "uid": "12d3314f-5cd7-42c6-a8a7-a741f9017ff4",
                "apiVersion": "v1",
                "resourceVersion": "529",
                "fieldPath": "spec.containers{metadata-proxy}"
            },
            "reason": "Pulled",
            "message": "Successfully pulled image \"k8s.gcr.io/metadata-proxy:v0.1.12\"",
            "source": {
                "component": "kubelet",
                "host": "bootstrap-e2e-master"
            },
            "firstTimestamp": "2020-01-15T16:11:29Z",
            "lastTimestamp": "2020-01-15T16:11:29Z",
            "count": 1,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "metadata-proxy-v0.1-chbgg.15ea1b594377b77b",
                "namespace": "kube-system",
                "selfLink": "/api/v1/namespaces/kube-system/events/metadata-proxy-v0.1-chbgg.15ea1b594377b77b",
                "uid": "b5d4b813-63f3-4f9c-81e5-136307f30881",
                "resourceVersion": "103",
                "creationTimestamp": "2020-01-15T16:11:29Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": "kube-system",
                "name": "metadata-proxy-v0.1-chbgg",
                "uid": "12d3314f-5cd7-42c6-a8a7-a741f9017ff4",
                "apiVersion": "v1",
                "resourceVersion": "529",
                "fieldPath": "spec.containers{metadata-proxy}"
            },
            "reason": "Created",
            "message": "Created container metadata-proxy",
            "source": {
                "component": "kubelet",
                "host": "bootstrap-e2e-master"
            },
            "firstTimestamp": "2020-01-15T16:11:29Z",
            "lastTimestamp": "2020-01-15T16:11:29Z",
            "count": 1,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "metadata-proxy-v0.1-chbgg.15ea1b59803f5f3e",
                "namespace": "kube-system",
                "selfLink": "/api/v1/namespaces/kube-system/events/metadata-proxy-v0.1-chbgg.15ea1b59803f5f3e",
                "uid": "988a86a7-42cc-4c25-8b68-e4cba3fa058f",
                "resourceVersion": "124",
                "creationTimestamp": "2020-01-15T16:11:30Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": "kube-system",
                "name": "metadata-proxy-v0.1-chbgg",
                "uid": "12d3314f-5cd7-42c6-a8a7-a741f9017ff4",
                "apiVersion": "v1",
                "resourceVersion": "529",
                "fieldPath": "spec.containers{metadata-proxy}"
            },
            "reason": "Started",
            "message": "Started container metadata-proxy",
            "source": {
                "component": "kubelet",
                "host": "bootstrap-e2e-master"
            },
            "firstTimestamp": "2020-01-15T16:11:30Z",
            "lastTimestamp": "2020-01-15T16:11:30Z",
            "count": 1,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "metadata-proxy-v0.1-chbgg.15ea1b5980847d8f",
                "namespace": "kube-system",
                "selfLink": "/api/v1/namespaces/kube-system/events/metadata-proxy-v0.1-chbgg.15ea1b5980847d8f",
                "uid": "2984404f-2e59-4b1b-b354-9a442dcf73d2",
                "resourceVersion": "128",
                "creationTimestamp": "2020-01-15T16:11:30Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": "kube-system",
                "name": "metadata-proxy-v0.1-chbgg",
                "uid": "12d3314f-5cd7-42c6-a8a7-a741f9017ff4",
                "apiVersion": "v1",
                "resourceVersion": "529",
                "fieldPath": "spec.containers{prometheus-to-sd-exporter}"
            },
            "reason": "Pulling",
            "message": "Pulling image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\"",
            "source": {
                "component": "kubelet",
                "host": "bootstrap-e2e-master"
            },
            "firstTimestamp": "2020-01-15T16:11:30Z",
            "lastTimestamp": "2020-01-15T16:11:30Z",
            "count": 1,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "metadata-proxy-v0.1-chbgg.15ea1b59ef7cf37f",
                "namespace": "kube-system",
                "selfLink": "/api/v1/namespaces/kube-system/events/metadata-proxy-v0.1-chbgg.15ea1b59ef7cf37f",
                "uid": "e1a795be-4bc3-4661-b150-b5f87befe5ac",
                "resourceVersion": "180",
                "creationTimestamp": "2020-01-15T16:11:32Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": "kube-system",
                "name": "metadata-proxy-v0.1-chbgg",
                "uid": "12d3314f-5cd7-42c6-a8a7-a741f9017ff4",
                "apiVersion": "v1",
                "resourceVersion": "529",
                "fieldPath": "spec.containers{prometheus-to-sd-exporter}"
            },
            "reason": "Pulled",
            "message": "Successfully pulled image \"k8s.gcr.io/prometheus-to-sd:v0.5.0\"",
            "source": {
                "component": "kubelet",
                "host": "bootstrap-e2e-master"
            },
            "firstTimestamp": "2020-01-15T16:11:32Z",
            "lastTimestamp": "2020-01-15T16:11:32Z",
            "count": 1,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "metadata-proxy-v0.1-chbgg.15ea1b5a33711075",
                "namespace": "kube-system",
                "selfLink": "/api/v1/namespaces/kube-system/events/metadata-proxy-v0.1-chbgg.15ea1b5a33711075",
                "uid": "f0f31e68-e397-431d-99d6-d2f1703e522b",
                "resourceVersion": "193",
                "creationTimestamp": "2020-01-15T16:11:33Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": "kube-system",
                "name": "metadata-proxy-v0.1-chbgg",
                "uid": "12d3314f-5cd7-42c6-a8a7-a741f9017ff4",
                "apiVersion": "v1",
                "resourceVersion": "529",
                "fieldPath": "spec.containers{prometheus-to-sd-exporter}"
            },
            "reason": "Created",
            "message": "Created container prometheus-to-sd-exporter",
            "source": {
                "component": "kubelet",
                "host": "bootstrap-e2e-master"
            },
            "firstTimestamp": "2020-01-15T16:11:33Z",
            "lastTimestamp": "2020-01-15T16:11:33Z",
            "count": 1,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "metadata-proxy-v0.1-chbgg.15ea1b5a6d088d1a",
                "namespace": "kube-system",
                "selfLink": "/api/v1/namespaces/kube-system/events/metadata-proxy-v0.1-chbgg.15ea1b5a6d088d1a",
                "uid": "d062401b-373f-4ee4-89a7-40a75759864f",
                "resourceVersion": "230",
                "creationTimestamp": "2020-01-15T16:11:34Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": "kube-system",
                "name": "metadata-proxy-v0.1-chbgg",
                "uid": "12d3314f-5cd7-42c6-a8a7-a741f9017ff4",
                "apiVersion": "v1",
                "resourceVersion": "529",
                "fieldPath": "spec.containers{prometheus-to-sd-exporter}"
            },
            "reason": "Started",
            "message": "Started container prometheus-to-sd-exporter",
            "source": {
                "component": "kubelet",
                "host": "bootstrap-e2e-master"
            },
            "firstTimestamp": "2020-01-15T16:11:34Z",
            "lastTimestamp": "2020-01-15T16:11:34Z",
            "count": 1,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "metadata-proxy-v0.1-nkdb2.15ea1b598257f78e",
                "namespace": "kube-system",
                "selfLink": "/api/v1/namespaces/kube-system/events/metadata-proxy-v0.1-nkdb2.15ea1b598257f78e",
                "uid": "6ebe96ec-337a-49c1-8388-3f44fe0938d5",
                "resourceVersion": "134",
                "creationTimestamp": "2020-01-15T16:11:31Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": "kube-system",
                "name": "metadata-proxy-v0.1-nkdb2",
                "uid": "ffe47122-0609-447f-8307-4e4b514eb001",
                "apiVersion": "v1",
                "resourceVersion": "584"
            },
            "reason": "Scheduled",
            "message": "Successfully assigned kube-system/metadata-proxy-v0.1-nkdb2 to bootstrap-e2e-minion-group-q10p",
            "source": {
                "component": "default-scheduler"
            },
            "firstTimestamp": "2020-01-15T16:11:30Z",
            "lastTimestamp": "2020-01-15T16:11:30Z",
            "count": 1,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "metadata-proxy-v0.1-nkdb2.15ea1b59c9e6f115",
                "namespace": "kube-system",
                "selfLink": "/api/v1/namespaces/kube-system/events/metadata-proxy-v0.1-nkdb2.15ea1b59c9e6f115",
                "uid": "b0a3e6db-c878-415b-8a16-1e867cf07d70",
                "resourceVersion": "188",
                "creationTimestamp": "2020-01-15T16:11:33Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": "kube-system",
                "name": "metadata-proxy-v0.1-nkdb2",
                "uid": "ffe47122-0609-447f-8307-4e4b514eb001",
                "apiVersion": "v1",
                "resourceVersion": "605"
            },
            "reason": "FailedMount",
            "message": "MountVolume.SetUp failed for volume \"metadata-proxy-token-mplx6\" : failed to sync secret cache: timed out waiting for the condition",
            "source": {
                "component": "kubelet",
                "host": "bootstrap-e2e-minion-group-q10p"
            },
            "firstTimestamp": "2020-01-15T16:11:31Z",
            "lastTimestamp": "2020-01-15T16:11:31Z",
            "count": 1,
            "type": "Warning",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "metadata-proxy-v0.1-nkdb2.15ea1b5a3bc3de25",
                "namespace": "kube-system",
                "selfLink": "/api/v1/namespaces/kube-system/events/metadata-proxy-v0.1-nkdb2.15ea1b5a3bc3de25",
                "uid": "45fd484a-7947-4bf3-a9fe-3ac0801160d1",
                "resourceVersion": "197",
                "creationTimestamp": "2020-01-15T16:11:33Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": "kube-system",
                "name": "metadata-proxy-v0.1-nkdb2",
                "uid": "ffe47122-0609-447f-8307-4e4b514eb001",
                "apiVersion": "v1",
                "resourceVersion": "605",
                "fieldPath": "spec.containers{metadata-proxy}"
            },
            "reason": "Pulling",
            "message": "Pulling image \"k8s.gcr.io/metadata-proxy:v0.1.12\"",
            "source": {
                "component": "kubelet",
                "host": "bootstrap-e2e-minion-group-q10p"
            },
            "firstTimestamp": "2020-01-15T16:11:33Z",
            "lastTimestamp": "2020-01-15T16:11:33Z",
            "count": 1,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "metadata-proxy-v0.1-nkdb2.15ea1b5ab103d0a0",
                "namespace": "kube-system",
                "selfLink": "/api/v1/namespaces/kube-system/events/metadata-proxy-v0.1-nkdb2.15ea1b5ab103d0a0",
                "uid": "061e172a-9085-483b-bd5e-399c206ed37d",
                "resourceVersion": "249",
                "creationTimestamp": "2020-01-15T16:11:35Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": "kube-system",
                "name": "metadata-proxy-v0.1-nkdb2",
                "uid": "ffe47122-0609-447f-8307-4e4b514eb001",
                "apiVersion": "v1",
                "resourceVersion": "605",
                "fieldPath": "spec.containers{metadata-proxy}"
            },
            "reason": "Pulled",
            "message": "Successfully pulled image \"k8s.gcr.io/metadata-proxy:v0.1.12\"",
            "source": {
                "component": "kubelet",
                "host": "bootstrap-e2e-minion-group-q10p"
            },
            "firstTimestamp": "2020-01-15T16:11:35Z",
            "lastTimestamp": "2020-01-15T16:11:35Z",
\"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"metadata-proxy-v0.1-nkdb2.15ea1b5b06f81b8c\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/metadata-proxy-v0.1-nkdb2.15ea1b5b06f81b8c\",\n                \"uid\": \"48ab2ad1-b721-4dbe-941a-5b3e82e19269\",\n                \"resourceVersion\": \"254\",\n                \"creationTimestamp\": \"2020-01-15T16:11:37Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"metadata-proxy-v0.1-nkdb2\",\n                \"uid\": \"ffe47122-0609-447f-8307-4e4b514eb001\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"605\",\n                \"fieldPath\": \"spec.containers{metadata-proxy}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container metadata-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-q10p\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:37Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:37Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"metadata-proxy-v0.1-nkdb2.15ea1b5b4e776732\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/metadata-proxy-v0.1-nkdb2.15ea1b5b4e776732\",\n                \"uid\": \"5e9d2d45-55cb-4456-9fc0-a4bccb490846\",\n              
  \"resourceVersion\": \"261\",\n                \"creationTimestamp\": \"2020-01-15T16:11:38Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"metadata-proxy-v0.1-nkdb2\",\n                \"uid\": \"ffe47122-0609-447f-8307-4e4b514eb001\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"605\",\n                \"fieldPath\": \"spec.containers{metadata-proxy}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container metadata-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-q10p\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:38Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:38Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"metadata-proxy-v0.1-nkdb2.15ea1b5b4ec2629b\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/metadata-proxy-v0.1-nkdb2.15ea1b5b4ec2629b\",\n                \"uid\": \"98fc8d40-5cb2-489c-a7a6-c2599fd68905\",\n                \"resourceVersion\": \"262\",\n                \"creationTimestamp\": \"2020-01-15T16:11:38Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"metadata-proxy-v0.1-nkdb2\",\n                \"uid\": \"ffe47122-0609-447f-8307-4e4b514eb001\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"605\",\n                \"fieldPath\": \"spec.containers{prometheus-to-sd-exporter}\"\n            
},\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"k8s.gcr.io/prometheus-to-sd:v0.5.0\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-q10p\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:38Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:38Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"metadata-proxy-v0.1-nkdb2.15ea1b5b9dec191a\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/metadata-proxy-v0.1-nkdb2.15ea1b5b9dec191a\",\n                \"uid\": \"ac968184-bced-4e77-be30-5fcb48c29d13\",\n                \"resourceVersion\": \"271\",\n                \"creationTimestamp\": \"2020-01-15T16:11:39Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"metadata-proxy-v0.1-nkdb2\",\n                \"uid\": \"ffe47122-0609-447f-8307-4e4b514eb001\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"605\",\n                \"fieldPath\": \"spec.containers{prometheus-to-sd-exporter}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"k8s.gcr.io/prometheus-to-sd:v0.5.0\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"bootstrap-e2e-minion-group-q10p\"\n            },\n            \"firstTimestamp\": \"2020-01-15T16:11:39Z\",\n            \"lastTimestamp\": \"2020-01-15T16:11:39Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n   
         \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"metadata-proxy-v0.1-nkdb2.15ea1b5bef2706c7\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/metadata-proxy-v0.1-nkdb2.15ea1b5bef2706c7\",\n                \"uid\": \"783ea0cc-a83f-4379-8406-ca08cb3449c4\",\n                \"resourceVersion\": \"274\",\n                \"creationTimestamp\": \"2020-01-15T16:11:41Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"metadata-proxy-v0.1-nkdb2\",\n                \"uid\": \"ffe47122-0609-447f-8307-4e4b514eb001\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"605\",\n                \"fieldPath\": \"spec.containers{prometheus-to-sd-exporter}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container prometheus-to-sd-exporter\",\n            \"source\": {\n                \"component\": \